00:00:00.001 Started by upstream project "autotest-per-patch" build number 131305 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.055 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.651 The recommended git tool is: git 00:00:00.652 using credential 00000000-0000-0000-0000-000000000002 00:00:00.654 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.671 Fetching changes from the remote Git repository 00:00:00.674 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.686 Using shallow fetch with depth 1 00:00:00.686 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.686 > git --version # timeout=10 00:00:00.701 > git --version # 'git version 2.39.2' 00:00:00.701 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.713 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.713 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.040 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.056 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.072 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:09.072 > git config core.sparsecheckout # timeout=10 00:00:09.086 > git read-tree -mu HEAD # timeout=10 00:00:09.104 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:09.123 Commit message: "packer: Fix typo in a package name" 00:00:09.123 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:09.241 [Pipeline] Start of Pipeline 00:00:09.252 [Pipeline] library 00:00:09.253 Loading library shm_lib@master 00:00:09.254 Library shm_lib@master is cached. Copying from home. 00:00:09.264 [Pipeline] node 00:00:24.265 Still waiting to schedule task 00:00:24.266 Waiting for next available executor on ‘vagrant-vm-host’ 00:09:43.631 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:43.633 [Pipeline] { 00:09:43.646 [Pipeline] catchError 00:09:43.647 [Pipeline] { 00:09:43.662 [Pipeline] wrap 00:09:43.676 [Pipeline] { 00:09:43.689 [Pipeline] stage 00:09:43.692 [Pipeline] { (Prologue) 00:09:43.716 [Pipeline] echo 00:09:43.717 Node: VM-host-SM0 00:09:43.725 [Pipeline] cleanWs 00:09:43.734 [WS-CLEANUP] Deleting project workspace... 00:09:43.734 [WS-CLEANUP] Deferred wipeout is used... 
00:09:43.740 [WS-CLEANUP] done 00:09:43.944 [Pipeline] setCustomBuildProperty 00:09:44.042 [Pipeline] httpRequest 00:09:44.441 [Pipeline] echo 00:09:44.443 Sorcerer 10.211.164.101 is alive 00:09:44.453 [Pipeline] retry 00:09:44.456 [Pipeline] { 00:09:44.475 [Pipeline] httpRequest 00:09:44.482 HttpMethod: GET 00:09:44.483 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:09:44.483 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:09:44.484 Response Code: HTTP/1.1 200 OK 00:09:44.484 Success: Status code 200 is in the accepted range: 200,404 00:09:44.485 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:09:44.629 [Pipeline] } 00:09:44.644 [Pipeline] // retry 00:09:44.649 [Pipeline] sh 00:09:44.926 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:09:44.942 [Pipeline] httpRequest 00:09:45.338 [Pipeline] echo 00:09:45.340 Sorcerer 10.211.164.101 is alive 00:09:45.352 [Pipeline] retry 00:09:45.354 [Pipeline] { 00:09:45.369 [Pipeline] httpRequest 00:09:45.374 HttpMethod: GET 00:09:45.374 URL: http://10.211.164.101/packages/spdk_006f950ffa1b5fa2cd1036867edc2476dd486668.tar.gz 00:09:45.375 Sending request to url: http://10.211.164.101/packages/spdk_006f950ffa1b5fa2cd1036867edc2476dd486668.tar.gz 00:09:45.376 Response Code: HTTP/1.1 200 OK 00:09:45.376 Success: Status code 200 is in the accepted range: 200,404 00:09:45.377 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_006f950ffa1b5fa2cd1036867edc2476dd486668.tar.gz 00:09:47.667 [Pipeline] } 00:09:47.686 [Pipeline] // retry 00:09:47.695 [Pipeline] sh 00:09:47.978 + tar --no-same-owner -xf spdk_006f950ffa1b5fa2cd1036867edc2476dd486668.tar.gz 00:09:51.275 [Pipeline] sh 00:09:51.555 + git -C spdk log --oneline -n5 00:09:51.555 006f950ff bdev/nvme: interrupt mode for PCIe transport 00:09:51.555 77d6f342b nvme/poll_group: create and manage fd_group for nvme poll group 00:09:51.555 dd2806f90 lib/nvme: callback function to manage events 00:09:51.555 1fded1607 lib/nvme: add opts_size to spdk_nvme_io_qpair_opts 00:09:51.555 16f54d1e7 thread: Extended options for spdk_interrupt_register 00:09:51.575 [Pipeline] writeFile 00:09:51.591 [Pipeline] sh 00:09:51.872 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:09:51.884 [Pipeline] sh 00:09:52.165 + cat autorun-spdk.conf 00:09:52.165 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:52.165 SPDK_TEST_NVMF=1 00:09:52.165 SPDK_TEST_NVMF_TRANSPORT=tcp 00:09:52.165 SPDK_TEST_URING=1 00:09:52.165 SPDK_TEST_USDT=1 00:09:52.165 SPDK_RUN_UBSAN=1 00:09:52.165 NET_TYPE=virt 00:09:52.165 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:52.171 RUN_NIGHTLY=0 00:09:52.173 [Pipeline] } 00:09:52.188 [Pipeline] // stage 00:09:52.204 [Pipeline] stage 00:09:52.208 [Pipeline] { (Run VM) 00:09:52.222 [Pipeline] sh 00:09:52.503 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:09:52.503 + echo 'Start stage prepare_nvme.sh' 00:09:52.503 Start stage prepare_nvme.sh 00:09:52.503 + [[ -n 4 ]] 00:09:52.503 + disk_prefix=ex4 00:09:52.503 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:09:52.503 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:09:52.503 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:09:52.503 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:52.503 ++ SPDK_TEST_NVMF=1 00:09:52.503 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:09:52.503 ++ SPDK_TEST_URING=1 00:09:52.503 ++ SPDK_TEST_USDT=1 00:09:52.503 ++ SPDK_RUN_UBSAN=1 00:09:52.503 ++ NET_TYPE=virt 00:09:52.503 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:52.503 ++ RUN_NIGHTLY=0 00:09:52.503 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:52.503 + nvme_files=() 00:09:52.503 + declare -A nvme_files 00:09:52.503 + backend_dir=/var/lib/libvirt/images/backends 00:09:52.503 + nvme_files['nvme.img']=5G 00:09:52.503 + nvme_files['nvme-cmb.img']=5G 00:09:52.503 + nvme_files['nvme-multi0.img']=4G 00:09:52.503 + nvme_files['nvme-multi1.img']=4G 00:09:52.503 + nvme_files['nvme-multi2.img']=4G 00:09:52.503 + nvme_files['nvme-openstack.img']=8G 00:09:52.503 + nvme_files['nvme-zns.img']=5G 00:09:52.503 + (( SPDK_TEST_NVME_PMR == 1 )) 00:09:52.503 + (( SPDK_TEST_FTL == 1 )) 00:09:52.503 + (( SPDK_TEST_NVME_FDP == 1 )) 00:09:52.503 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:09:52.503 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:09:52.503 + for nvme in "${!nvme_files[@]}" 00:09:52.503 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:09:53.877 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:09:53.877 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:09:53.877 + echo 'End stage prepare_nvme.sh' 00:09:53.877 End stage prepare_nvme.sh 00:09:53.888 [Pipeline] sh 00:09:54.210 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:09:54.210 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b 
/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:09:54.210 00:09:54.210 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:09:54.210 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:09:54.210 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:54.210 HELP=0 00:09:54.210 DRY_RUN=0 00:09:54.210 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:09:54.210 NVME_DISKS_TYPE=nvme,nvme, 00:09:54.210 NVME_AUTO_CREATE=0 00:09:54.210 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:09:54.210 NVME_CMB=,, 00:09:54.210 NVME_PMR=,, 00:09:54.210 NVME_ZNS=,, 00:09:54.210 NVME_MS=,, 00:09:54.210 NVME_FDP=,, 00:09:54.210 SPDK_VAGRANT_DISTRO=fedora39 00:09:54.210 SPDK_VAGRANT_VMCPU=10 00:09:54.210 SPDK_VAGRANT_VMRAM=12288 00:09:54.210 SPDK_VAGRANT_PROVIDER=libvirt 00:09:54.210 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:09:54.210 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:09:54.210 SPDK_OPENSTACK_NETWORK=0 00:09:54.210 VAGRANT_PACKAGE_BOX=0 00:09:54.210 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:09:54.210 FORCE_DISTRO=true 00:09:54.210 VAGRANT_BOX_VERSION= 00:09:54.210 EXTRA_VAGRANTFILES= 00:09:54.210 NIC_MODEL=e1000 00:09:54.210 00:09:54.210 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:09:54.210 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:09:57.496 Bringing machine 'default' up with 'libvirt' provider... 00:09:58.494 ==> default: Creating image (snapshot of base box volume). 00:09:58.753 ==> default: Creating domain with the following settings... 
00:09:58.753 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729192327_09b67ee30dc270f9bf9e 00:09:58.753 ==> default: -- Domain type: kvm 00:09:58.753 ==> default: -- Cpus: 10 00:09:58.753 ==> default: -- Feature: acpi 00:09:58.753 ==> default: -- Feature: apic 00:09:58.753 ==> default: -- Feature: pae 00:09:58.753 ==> default: -- Memory: 12288M 00:09:58.753 ==> default: -- Memory Backing: hugepages: 00:09:58.753 ==> default: -- Management MAC: 00:09:58.753 ==> default: -- Loader: 00:09:58.753 ==> default: -- Nvram: 00:09:58.753 ==> default: -- Base box: spdk/fedora39 00:09:58.753 ==> default: -- Storage pool: default 00:09:58.753 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729192327_09b67ee30dc270f9bf9e.img (20G) 00:09:58.753 ==> default: -- Volume Cache: default 00:09:58.753 ==> default: -- Kernel: 00:09:58.753 ==> default: -- Initrd: 00:09:58.753 ==> default: -- Graphics Type: vnc 00:09:58.753 ==> default: -- Graphics Port: -1 00:09:58.753 ==> default: -- Graphics IP: 127.0.0.1 00:09:58.753 ==> default: -- Graphics Password: Not defined 00:09:58.753 ==> default: -- Video Type: cirrus 00:09:58.753 ==> default: -- Video VRAM: 9216 00:09:58.753 ==> default: -- Sound Type: 00:09:58.753 ==> default: -- Keymap: en-us 00:09:58.753 ==> default: -- TPM Path: 00:09:58.753 ==> default: -- INPUT: type=mouse, bus=ps2 00:09:58.753 ==> default: -- Command line args: 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:09:58.753 ==> default: -> value=-drive, 00:09:58.753 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:09:58.753 ==> default: -> value=-drive, 00:09:58.753 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:58.753 ==> default: -> value=-drive, 00:09:58.753 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:58.753 ==> default: -> value=-drive, 00:09:58.753 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:09:58.753 ==> default: -> value=-device, 00:09:58.753 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:09:59.011 ==> default: Creating shared folders metadata... 00:09:59.011 ==> default: Starting domain. 00:10:00.915 ==> default: Waiting for domain to get an IP address... 00:10:18.999 ==> default: Waiting for SSH to become available... 00:10:18.999 ==> default: Configuring and enabling network interfaces... 
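For reference, the -device/-drive pairs listed above assemble into a QEMU invocation along the lines of the sketch below. This is an illustrative reconstruction only: the real command line is generated by the Vagrantfile/libvirt and also carries machine, memory, and NIC options that are not echoed in this log.

    # Controller nvme-0 (serial 12340) exposes one namespace backed by ex4-nvme.img;
    # controller nvme-1 (serial 12341) exposes three namespaces backed by the multi0/1/2 images.
    qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096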
00:10:23.185 default: SSH address: 192.168.121.86:22 00:10:23.185 default: SSH username: vagrant 00:10:23.185 default: SSH auth method: private key 00:10:25.086 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:10:33.235 ==> default: Mounting SSHFS shared folder... 00:10:34.190 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:10:34.190 ==> default: Checking Mount.. 00:10:35.124 ==> default: Folder Successfully Mounted! 00:10:35.125 ==> default: Running provisioner: file... 00:10:36.059 default: ~/.gitconfig => .gitconfig 00:10:36.317 00:10:36.317 SUCCESS! 00:10:36.317 00:10:36.317 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:10:36.317 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:10:36.317 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:10:36.317 00:10:36.325 [Pipeline] } 00:10:36.342 [Pipeline] // stage 00:10:36.352 [Pipeline] dir 00:10:36.353 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:10:36.354 [Pipeline] { 00:10:36.368 [Pipeline] catchError 00:10:36.370 [Pipeline] { 00:10:36.382 [Pipeline] sh 00:10:36.697 + vagrant ssh-config --host vagrant 00:10:36.697 + sed -ne /^Host/,$p 00:10:36.697 + tee ssh_conf 00:10:39.976 Host vagrant 00:10:39.976 HostName 192.168.121.86 00:10:39.976 User vagrant 00:10:39.976 Port 22 00:10:39.976 UserKnownHostsFile /dev/null 00:10:39.976 StrictHostKeyChecking no 00:10:39.976 PasswordAuthentication no 00:10:39.976 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:10:39.976 IdentitiesOnly yes 00:10:39.976 LogLevel FATAL 00:10:39.976 ForwardAgent yes 00:10:39.976 ForwardX11 yes 00:10:39.976 00:10:39.989 [Pipeline] withEnv 00:10:39.992 [Pipeline] { 00:10:40.005 [Pipeline] sh 00:10:40.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:10:40.294 source /etc/os-release 00:10:40.294 [[ -e /image.version ]] && img=$(< /image.version) 00:10:40.294 # Minimal, systemd-like check. 00:10:40.294 if [[ -e /.dockerenv ]]; then 00:10:40.294 # Clear garbage from the node's name: 00:10:40.294 # agt-er_autotest_547-896 -> autotest_547-896 00:10:40.294 # $HOSTNAME is the actual container id 00:10:40.294 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:10:40.294 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:10:40.294 # We can assume this is a mount from a host where container is running, 00:10:40.294 # so fetch its hostname to easily identify the target swarm worker. 
00:10:40.294 container="$(< /etc/hostname) ($agent)" 00:10:40.294 else 00:10:40.294 # Fallback 00:10:40.294 container=$agent 00:10:40.294 fi 00:10:40.294 fi 00:10:40.294 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:10:40.294 00:10:40.563 [Pipeline] } 00:10:40.581 [Pipeline] // withEnv 00:10:40.590 [Pipeline] setCustomBuildProperty 00:10:40.605 [Pipeline] stage 00:10:40.607 [Pipeline] { (Tests) 00:10:40.626 [Pipeline] sh 00:10:40.903 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:10:41.174 [Pipeline] sh 00:10:41.451 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:10:41.724 [Pipeline] timeout 00:10:41.725 Timeout set to expire in 1 hr 0 min 00:10:41.727 [Pipeline] { 00:10:41.741 [Pipeline] sh 00:10:42.023 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:10:42.624 HEAD is now at 006f950ff bdev/nvme: interrupt mode for PCIe transport 00:10:42.634 [Pipeline] sh 00:10:42.911 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:10:43.182 [Pipeline] sh 00:10:43.463 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:10:43.737 [Pipeline] sh 00:10:44.016 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:10:44.016 ++ readlink -f spdk_repo 00:10:44.275 + DIR_ROOT=/home/vagrant/spdk_repo 00:10:44.275 + [[ -n /home/vagrant/spdk_repo ]] 00:10:44.275 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:10:44.275 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:10:44.275 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:10:44.275 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:10:44.275 + [[ -d /home/vagrant/spdk_repo/output ]] 00:10:44.275 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:10:44.275 + cd /home/vagrant/spdk_repo 00:10:44.275 + source /etc/os-release 00:10:44.275 ++ NAME='Fedora Linux' 00:10:44.275 ++ VERSION='39 (Cloud Edition)' 00:10:44.275 ++ ID=fedora 00:10:44.275 ++ VERSION_ID=39 00:10:44.275 ++ VERSION_CODENAME= 00:10:44.275 ++ PLATFORM_ID=platform:f39 00:10:44.275 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:10:44.275 ++ ANSI_COLOR='0;38;2;60;110;180' 00:10:44.275 ++ LOGO=fedora-logo-icon 00:10:44.275 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:10:44.275 ++ HOME_URL=https://fedoraproject.org/ 00:10:44.275 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:10:44.275 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:10:44.275 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:10:44.275 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:10:44.275 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:10:44.275 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:10:44.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:10:44.275 ++ SUPPORT_END=2024-11-12 00:10:44.275 ++ VARIANT='Cloud Edition' 00:10:44.275 ++ VARIANT_ID=cloud 00:10:44.275 + uname -a 00:10:44.275 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:10:44.275 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:44.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:44.534 Hugepages 00:10:44.534 node hugesize free / total 00:10:44.534 node0 1048576kB 0 / 0 00:10:44.534 node0 2048kB 0 / 0 00:10:44.534 00:10:44.534 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:44.792 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:44.792 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:10:44.792 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:10:44.792 + rm -f /tmp/spdk-ld-path 00:10:44.792 + source autorun-spdk.conf 00:10:44.792 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:44.792 ++ SPDK_TEST_NVMF=1 00:10:44.792 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:44.792 ++ SPDK_TEST_URING=1 00:10:44.792 ++ SPDK_TEST_USDT=1 00:10:44.792 ++ SPDK_RUN_UBSAN=1 00:10:44.792 ++ NET_TYPE=virt 00:10:44.792 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:44.792 ++ RUN_NIGHTLY=0 00:10:44.792 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:10:44.792 + [[ -n '' ]] 00:10:44.792 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:10:44.792 + for M in /var/spdk/build-*-manifest.txt 00:10:44.792 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:10:44.792 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:44.792 + for M in /var/spdk/build-*-manifest.txt 00:10:44.792 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:10:44.792 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:44.792 + for M in /var/spdk/build-*-manifest.txt 00:10:44.792 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:10:44.792 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:10:44.792 ++ uname 00:10:44.792 + [[ Linux == \L\i\n\u\x ]] 00:10:44.792 + sudo dmesg -T 00:10:44.792 + sudo dmesg --clear 00:10:44.792 + dmesg_pid=5258 00:10:44.792 + [[ Fedora Linux == FreeBSD ]] 00:10:44.792 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.792 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.792 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:10:44.792 + sudo dmesg -Tw 00:10:44.792 + [[ -x /usr/src/fio-static/fio ]] 00:10:44.792 + export FIO_BIN=/usr/src/fio-static/fio 00:10:44.792 + FIO_BIN=/usr/src/fio-static/fio 00:10:44.792 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:10:44.792 + [[ ! -v VFIO_QEMU_BIN ]] 00:10:44.792 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:10:44.792 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.792 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.792 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:10:44.792 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.792 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.792 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:44.792 Test configuration: 00:10:44.792 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:44.792 SPDK_TEST_NVMF=1 00:10:44.792 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:44.792 SPDK_TEST_URING=1 00:10:44.792 SPDK_TEST_USDT=1 00:10:44.792 SPDK_RUN_UBSAN=1 00:10:44.792 NET_TYPE=virt 00:10:44.792 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:45.051 RUN_NIGHTLY=0 19:12:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:10:45.051 19:12:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.051 19:12:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:10:45.051 19:12:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:45.051 19:12:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.051 19:12:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.051 19:12:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.051 19:12:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.051 19:12:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.051 19:12:54 -- paths/export.sh@5 -- $ export PATH 00:10:45.051 19:12:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.051 19:12:54 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:10:45.051 19:12:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:10:45.051 19:12:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729192374.XXXXXX 00:10:45.051 19:12:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729192374.ZXmCMO 00:10:45.051 19:12:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:10:45.051 19:12:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:10:45.051 19:12:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:10:45.051 19:12:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:10:45.051 19:12:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:10:45.051 19:12:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:10:45.051 19:12:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:10:45.051 19:12:54 -- common/autotest_common.sh@10 -- $ set +x 00:10:45.051 19:12:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:10:45.051 19:12:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:10:45.051 19:12:54 -- pm/common@17 -- $ local monitor 00:10:45.051 19:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.051 19:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.051 19:12:54 -- pm/common@25 -- $ sleep 1 00:10:45.051 19:12:54 -- pm/common@21 -- $ date +%s 00:10:45.051 19:12:54 -- pm/common@21 -- $ date +%s 00:10:45.051 19:12:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729192374 00:10:45.051 19:12:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729192374 00:10:45.051 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729192374_collect-cpu-load.pm.log 00:10:45.051 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729192374_collect-vmstat.pm.log 00:10:45.986 19:12:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:10:45.986 19:12:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:10:45.986 19:12:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:10:45.986 19:12:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:10:45.986 19:12:55 -- spdk/autobuild.sh@16 -- $ date -u 00:10:45.986 Thu Oct 17 07:12:55 PM UTC 2024 00:10:45.986 19:12:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:10:45.986 v25.01-pre-87-g006f950ff 00:10:45.986 19:12:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:10:45.986 19:12:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:10:45.986 19:12:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:10:45.986 19:12:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:10:45.986 19:12:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:10:45.986 19:12:55 -- common/autotest_common.sh@10 -- $ set +x 00:10:45.986 
************************************ 00:10:45.986 START TEST ubsan 00:10:45.986 ************************************ 00:10:45.986 using ubsan 00:10:45.986 19:12:55 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:10:45.986 00:10:45.986 real 0m0.000s 00:10:45.986 user 0m0.000s 00:10:45.986 sys 0m0.000s 00:10:45.986 19:12:55 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:10:45.986 19:12:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:10:45.986 ************************************ 00:10:45.986 END TEST ubsan 00:10:45.986 ************************************ 00:10:45.986 19:12:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:10:45.986 19:12:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:10:45.986 19:12:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:10:45.986 19:12:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:10:46.244 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:46.244 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:46.503 Using 'verbs' RDMA provider 00:11:02.326 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:11:14.547 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:11:14.547 Creating mk/config.mk...done. 00:11:14.547 Creating mk/cc.flags.mk...done. 00:11:14.547 Type 'make' to build. 00:11:14.547 19:13:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:11:14.547 19:13:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:11:14.547 19:13:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:11:14.547 19:13:23 -- common/autotest_common.sh@10 -- $ set +x 00:11:14.547 ************************************ 00:11:14.548 START TEST make 00:11:14.548 ************************************ 00:11:14.548 19:13:23 make -- common/autotest_common.sh@1125 -- $ make -j10 00:11:14.548 make[1]: Nothing to be done for 'all'. 
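Outside the CI wrapper, the build step launched by run_test above reduces to the configure invocation echoed a few lines earlier plus a parallel make. A minimal sketch, assuming the /home/vagrant/spdk_repo checkout layout used by this job:

    cd /home/vagrant/spdk_repo/spdk
    # same flags as passed by spdk/autobuild.sh in this run
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10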
00:11:29.413 The Meson build system 00:11:29.413 Version: 1.5.0 00:11:29.413 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:11:29.413 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:11:29.413 Build type: native build 00:11:29.413 Program cat found: YES (/usr/bin/cat) 00:11:29.413 Project name: DPDK 00:11:29.413 Project version: 24.03.0 00:11:29.413 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:11:29.413 C linker for the host machine: cc ld.bfd 2.40-14 00:11:29.413 Host machine cpu family: x86_64 00:11:29.413 Host machine cpu: x86_64 00:11:29.413 Message: ## Building in Developer Mode ## 00:11:29.413 Program pkg-config found: YES (/usr/bin/pkg-config) 00:11:29.413 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:11:29.413 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:11:29.413 Program python3 found: YES (/usr/bin/python3) 00:11:29.413 Program cat found: YES (/usr/bin/cat) 00:11:29.413 Compiler for C supports arguments -march=native: YES 00:11:29.413 Checking for size of "void *" : 8 00:11:29.413 Checking for size of "void *" : 8 (cached) 00:11:29.413 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:11:29.413 Library m found: YES 00:11:29.413 Library numa found: YES 00:11:29.413 Has header "numaif.h" : YES 00:11:29.413 Library fdt found: NO 00:11:29.413 Library execinfo found: NO 00:11:29.413 Has header "execinfo.h" : YES 00:11:29.413 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:11:29.413 Run-time dependency libarchive found: NO (tried pkgconfig) 00:11:29.413 Run-time dependency libbsd found: NO (tried pkgconfig) 00:11:29.413 Run-time dependency jansson found: NO (tried pkgconfig) 00:11:29.413 Run-time dependency openssl found: YES 3.1.1 00:11:29.413 Run-time dependency libpcap found: YES 1.10.4 00:11:29.413 Has header "pcap.h" with dependency libpcap: YES 00:11:29.413 Compiler for C supports arguments -Wcast-qual: YES 00:11:29.413 Compiler for C supports arguments -Wdeprecated: YES 00:11:29.413 Compiler for C supports arguments -Wformat: YES 00:11:29.413 Compiler for C supports arguments -Wformat-nonliteral: NO 00:11:29.413 Compiler for C supports arguments -Wformat-security: NO 00:11:29.413 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:29.413 Compiler for C supports arguments -Wmissing-prototypes: YES 00:11:29.413 Compiler for C supports arguments -Wnested-externs: YES 00:11:29.413 Compiler for C supports arguments -Wold-style-definition: YES 00:11:29.413 Compiler for C supports arguments -Wpointer-arith: YES 00:11:29.413 Compiler for C supports arguments -Wsign-compare: YES 00:11:29.413 Compiler for C supports arguments -Wstrict-prototypes: YES 00:11:29.413 Compiler for C supports arguments -Wundef: YES 00:11:29.413 Compiler for C supports arguments -Wwrite-strings: YES 00:11:29.413 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:11:29.413 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:11:29.413 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:29.413 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:11:29.413 Program objdump found: YES (/usr/bin/objdump) 00:11:29.413 Compiler for C supports arguments -mavx512f: YES 00:11:29.413 Checking if "AVX512 checking" compiles: YES 00:11:29.413 Fetching value of define "__SSE4_2__" : 1 00:11:29.413 Fetching value of define 
"__AES__" : 1 00:11:29.413 Fetching value of define "__AVX__" : 1 00:11:29.413 Fetching value of define "__AVX2__" : 1 00:11:29.413 Fetching value of define "__AVX512BW__" : (undefined) 00:11:29.413 Fetching value of define "__AVX512CD__" : (undefined) 00:11:29.413 Fetching value of define "__AVX512DQ__" : (undefined) 00:11:29.413 Fetching value of define "__AVX512F__" : (undefined) 00:11:29.413 Fetching value of define "__AVX512VL__" : (undefined) 00:11:29.413 Fetching value of define "__PCLMUL__" : 1 00:11:29.413 Fetching value of define "__RDRND__" : 1 00:11:29.413 Fetching value of define "__RDSEED__" : 1 00:11:29.413 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:11:29.413 Fetching value of define "__znver1__" : (undefined) 00:11:29.413 Fetching value of define "__znver2__" : (undefined) 00:11:29.413 Fetching value of define "__znver3__" : (undefined) 00:11:29.413 Fetching value of define "__znver4__" : (undefined) 00:11:29.413 Compiler for C supports arguments -Wno-format-truncation: YES 00:11:29.413 Message: lib/log: Defining dependency "log" 00:11:29.413 Message: lib/kvargs: Defining dependency "kvargs" 00:11:29.413 Message: lib/telemetry: Defining dependency "telemetry" 00:11:29.413 Checking for function "getentropy" : NO 00:11:29.413 Message: lib/eal: Defining dependency "eal" 00:11:29.413 Message: lib/ring: Defining dependency "ring" 00:11:29.413 Message: lib/rcu: Defining dependency "rcu" 00:11:29.413 Message: lib/mempool: Defining dependency "mempool" 00:11:29.413 Message: lib/mbuf: Defining dependency "mbuf" 00:11:29.413 Fetching value of define "__PCLMUL__" : 1 (cached) 00:11:29.413 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:11:29.413 Compiler for C supports arguments -mpclmul: YES 00:11:29.413 Compiler for C supports arguments -maes: YES 00:11:29.413 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:29.413 Compiler for C supports arguments -mavx512bw: YES 00:11:29.413 Compiler for C supports arguments -mavx512dq: YES 00:11:29.413 Compiler for C supports arguments -mavx512vl: YES 00:11:29.413 Compiler for C supports arguments -mvpclmulqdq: YES 00:11:29.413 Compiler for C supports arguments -mavx2: YES 00:11:29.413 Compiler for C supports arguments -mavx: YES 00:11:29.413 Message: lib/net: Defining dependency "net" 00:11:29.413 Message: lib/meter: Defining dependency "meter" 00:11:29.413 Message: lib/ethdev: Defining dependency "ethdev" 00:11:29.413 Message: lib/pci: Defining dependency "pci" 00:11:29.413 Message: lib/cmdline: Defining dependency "cmdline" 00:11:29.413 Message: lib/hash: Defining dependency "hash" 00:11:29.413 Message: lib/timer: Defining dependency "timer" 00:11:29.413 Message: lib/compressdev: Defining dependency "compressdev" 00:11:29.413 Message: lib/cryptodev: Defining dependency "cryptodev" 00:11:29.413 Message: lib/dmadev: Defining dependency "dmadev" 00:11:29.413 Compiler for C supports arguments -Wno-cast-qual: YES 00:11:29.413 Message: lib/power: Defining dependency "power" 00:11:29.413 Message: lib/reorder: Defining dependency "reorder" 00:11:29.413 Message: lib/security: Defining dependency "security" 00:11:29.413 Has header "linux/userfaultfd.h" : YES 00:11:29.413 Has header "linux/vduse.h" : YES 00:11:29.413 Message: lib/vhost: Defining dependency "vhost" 00:11:29.413 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:11:29.413 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:11:29.413 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:11:29.413 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:11:29.413 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:11:29.413 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:11:29.413 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:11:29.413 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:11:29.413 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:11:29.413 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:11:29.413 Program doxygen found: YES (/usr/local/bin/doxygen) 00:11:29.414 Configuring doxy-api-html.conf using configuration 00:11:29.414 Configuring doxy-api-man.conf using configuration 00:11:29.414 Program mandb found: YES (/usr/bin/mandb) 00:11:29.414 Program sphinx-build found: NO 00:11:29.414 Configuring rte_build_config.h using configuration 00:11:29.414 Message: 00:11:29.414 ================= 00:11:29.414 Applications Enabled 00:11:29.414 ================= 00:11:29.414 00:11:29.414 apps: 00:11:29.414 00:11:29.414 00:11:29.414 Message: 00:11:29.414 ================= 00:11:29.414 Libraries Enabled 00:11:29.414 ================= 00:11:29.414 00:11:29.414 libs: 00:11:29.414 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:11:29.414 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:11:29.414 cryptodev, dmadev, power, reorder, security, vhost, 00:11:29.414 00:11:29.414 Message: 00:11:29.414 =============== 00:11:29.414 Drivers Enabled 00:11:29.414 =============== 00:11:29.414 00:11:29.414 common: 00:11:29.414 00:11:29.414 bus: 00:11:29.414 pci, vdev, 00:11:29.414 mempool: 00:11:29.414 ring, 00:11:29.414 dma: 00:11:29.414 00:11:29.414 net: 00:11:29.414 00:11:29.414 crypto: 00:11:29.414 00:11:29.414 compress: 00:11:29.414 00:11:29.414 vdpa: 00:11:29.414 00:11:29.414 00:11:29.414 Message: 00:11:29.414 ================= 00:11:29.414 Content Skipped 00:11:29.414 ================= 00:11:29.414 00:11:29.414 apps: 00:11:29.414 dumpcap: explicitly disabled via build config 00:11:29.414 graph: explicitly disabled via build config 00:11:29.414 pdump: explicitly disabled via build config 00:11:29.414 proc-info: explicitly disabled via build config 00:11:29.414 test-acl: explicitly disabled via build config 00:11:29.414 test-bbdev: explicitly disabled via build config 00:11:29.414 test-cmdline: explicitly disabled via build config 00:11:29.414 test-compress-perf: explicitly disabled via build config 00:11:29.414 test-crypto-perf: explicitly disabled via build config 00:11:29.414 test-dma-perf: explicitly disabled via build config 00:11:29.414 test-eventdev: explicitly disabled via build config 00:11:29.414 test-fib: explicitly disabled via build config 00:11:29.414 test-flow-perf: explicitly disabled via build config 00:11:29.414 test-gpudev: explicitly disabled via build config 00:11:29.414 test-mldev: explicitly disabled via build config 00:11:29.414 test-pipeline: explicitly disabled via build config 00:11:29.414 test-pmd: explicitly disabled via build config 00:11:29.414 test-regex: explicitly disabled via build config 00:11:29.414 test-sad: explicitly disabled via build config 00:11:29.414 test-security-perf: explicitly disabled via build config 00:11:29.414 00:11:29.414 libs: 00:11:29.414 argparse: explicitly disabled via build config 00:11:29.414 metrics: explicitly disabled via build config 00:11:29.414 acl: explicitly disabled via build config 00:11:29.414 bbdev: explicitly disabled via build config 
00:11:29.414 bitratestats: explicitly disabled via build config 00:11:29.414 bpf: explicitly disabled via build config 00:11:29.414 cfgfile: explicitly disabled via build config 00:11:29.414 distributor: explicitly disabled via build config 00:11:29.414 efd: explicitly disabled via build config 00:11:29.414 eventdev: explicitly disabled via build config 00:11:29.414 dispatcher: explicitly disabled via build config 00:11:29.414 gpudev: explicitly disabled via build config 00:11:29.414 gro: explicitly disabled via build config 00:11:29.414 gso: explicitly disabled via build config 00:11:29.414 ip_frag: explicitly disabled via build config 00:11:29.414 jobstats: explicitly disabled via build config 00:11:29.414 latencystats: explicitly disabled via build config 00:11:29.414 lpm: explicitly disabled via build config 00:11:29.414 member: explicitly disabled via build config 00:11:29.414 pcapng: explicitly disabled via build config 00:11:29.414 rawdev: explicitly disabled via build config 00:11:29.414 regexdev: explicitly disabled via build config 00:11:29.414 mldev: explicitly disabled via build config 00:11:29.414 rib: explicitly disabled via build config 00:11:29.414 sched: explicitly disabled via build config 00:11:29.414 stack: explicitly disabled via build config 00:11:29.414 ipsec: explicitly disabled via build config 00:11:29.414 pdcp: explicitly disabled via build config 00:11:29.414 fib: explicitly disabled via build config 00:11:29.414 port: explicitly disabled via build config 00:11:29.414 pdump: explicitly disabled via build config 00:11:29.414 table: explicitly disabled via build config 00:11:29.414 pipeline: explicitly disabled via build config 00:11:29.414 graph: explicitly disabled via build config 00:11:29.414 node: explicitly disabled via build config 00:11:29.414 00:11:29.414 drivers: 00:11:29.414 common/cpt: not in enabled drivers build config 00:11:29.414 common/dpaax: not in enabled drivers build config 00:11:29.414 common/iavf: not in enabled drivers build config 00:11:29.414 common/idpf: not in enabled drivers build config 00:11:29.414 common/ionic: not in enabled drivers build config 00:11:29.414 common/mvep: not in enabled drivers build config 00:11:29.414 common/octeontx: not in enabled drivers build config 00:11:29.414 bus/auxiliary: not in enabled drivers build config 00:11:29.414 bus/cdx: not in enabled drivers build config 00:11:29.414 bus/dpaa: not in enabled drivers build config 00:11:29.414 bus/fslmc: not in enabled drivers build config 00:11:29.414 bus/ifpga: not in enabled drivers build config 00:11:29.414 bus/platform: not in enabled drivers build config 00:11:29.414 bus/uacce: not in enabled drivers build config 00:11:29.414 bus/vmbus: not in enabled drivers build config 00:11:29.414 common/cnxk: not in enabled drivers build config 00:11:29.414 common/mlx5: not in enabled drivers build config 00:11:29.414 common/nfp: not in enabled drivers build config 00:11:29.414 common/nitrox: not in enabled drivers build config 00:11:29.414 common/qat: not in enabled drivers build config 00:11:29.414 common/sfc_efx: not in enabled drivers build config 00:11:29.414 mempool/bucket: not in enabled drivers build config 00:11:29.414 mempool/cnxk: not in enabled drivers build config 00:11:29.414 mempool/dpaa: not in enabled drivers build config 00:11:29.414 mempool/dpaa2: not in enabled drivers build config 00:11:29.414 mempool/octeontx: not in enabled drivers build config 00:11:29.414 mempool/stack: not in enabled drivers build config 00:11:29.414 dma/cnxk: not in enabled 
drivers build config 00:11:29.414 dma/dpaa: not in enabled drivers build config 00:11:29.414 dma/dpaa2: not in enabled drivers build config 00:11:29.414 dma/hisilicon: not in enabled drivers build config 00:11:29.414 dma/idxd: not in enabled drivers build config 00:11:29.414 dma/ioat: not in enabled drivers build config 00:11:29.414 dma/skeleton: not in enabled drivers build config 00:11:29.414 net/af_packet: not in enabled drivers build config 00:11:29.414 net/af_xdp: not in enabled drivers build config 00:11:29.414 net/ark: not in enabled drivers build config 00:11:29.414 net/atlantic: not in enabled drivers build config 00:11:29.414 net/avp: not in enabled drivers build config 00:11:29.414 net/axgbe: not in enabled drivers build config 00:11:29.414 net/bnx2x: not in enabled drivers build config 00:11:29.414 net/bnxt: not in enabled drivers build config 00:11:29.414 net/bonding: not in enabled drivers build config 00:11:29.414 net/cnxk: not in enabled drivers build config 00:11:29.414 net/cpfl: not in enabled drivers build config 00:11:29.414 net/cxgbe: not in enabled drivers build config 00:11:29.414 net/dpaa: not in enabled drivers build config 00:11:29.414 net/dpaa2: not in enabled drivers build config 00:11:29.414 net/e1000: not in enabled drivers build config 00:11:29.414 net/ena: not in enabled drivers build config 00:11:29.414 net/enetc: not in enabled drivers build config 00:11:29.414 net/enetfec: not in enabled drivers build config 00:11:29.414 net/enic: not in enabled drivers build config 00:11:29.414 net/failsafe: not in enabled drivers build config 00:11:29.414 net/fm10k: not in enabled drivers build config 00:11:29.414 net/gve: not in enabled drivers build config 00:11:29.414 net/hinic: not in enabled drivers build config 00:11:29.414 net/hns3: not in enabled drivers build config 00:11:29.414 net/i40e: not in enabled drivers build config 00:11:29.414 net/iavf: not in enabled drivers build config 00:11:29.414 net/ice: not in enabled drivers build config 00:11:29.414 net/idpf: not in enabled drivers build config 00:11:29.414 net/igc: not in enabled drivers build config 00:11:29.414 net/ionic: not in enabled drivers build config 00:11:29.414 net/ipn3ke: not in enabled drivers build config 00:11:29.414 net/ixgbe: not in enabled drivers build config 00:11:29.414 net/mana: not in enabled drivers build config 00:11:29.414 net/memif: not in enabled drivers build config 00:11:29.414 net/mlx4: not in enabled drivers build config 00:11:29.414 net/mlx5: not in enabled drivers build config 00:11:29.414 net/mvneta: not in enabled drivers build config 00:11:29.414 net/mvpp2: not in enabled drivers build config 00:11:29.414 net/netvsc: not in enabled drivers build config 00:11:29.414 net/nfb: not in enabled drivers build config 00:11:29.414 net/nfp: not in enabled drivers build config 00:11:29.414 net/ngbe: not in enabled drivers build config 00:11:29.414 net/null: not in enabled drivers build config 00:11:29.414 net/octeontx: not in enabled drivers build config 00:11:29.414 net/octeon_ep: not in enabled drivers build config 00:11:29.414 net/pcap: not in enabled drivers build config 00:11:29.414 net/pfe: not in enabled drivers build config 00:11:29.414 net/qede: not in enabled drivers build config 00:11:29.414 net/ring: not in enabled drivers build config 00:11:29.414 net/sfc: not in enabled drivers build config 00:11:29.414 net/softnic: not in enabled drivers build config 00:11:29.414 net/tap: not in enabled drivers build config 00:11:29.414 net/thunderx: not in enabled drivers build 
config 00:11:29.414 net/txgbe: not in enabled drivers build config 00:11:29.414 net/vdev_netvsc: not in enabled drivers build config 00:11:29.414 net/vhost: not in enabled drivers build config 00:11:29.414 net/virtio: not in enabled drivers build config 00:11:29.414 net/vmxnet3: not in enabled drivers build config 00:11:29.414 raw/*: missing internal dependency, "rawdev" 00:11:29.414 crypto/armv8: not in enabled drivers build config 00:11:29.414 crypto/bcmfs: not in enabled drivers build config 00:11:29.414 crypto/caam_jr: not in enabled drivers build config 00:11:29.414 crypto/ccp: not in enabled drivers build config 00:11:29.414 crypto/cnxk: not in enabled drivers build config 00:11:29.414 crypto/dpaa_sec: not in enabled drivers build config 00:11:29.414 crypto/dpaa2_sec: not in enabled drivers build config 00:11:29.414 crypto/ipsec_mb: not in enabled drivers build config 00:11:29.414 crypto/mlx5: not in enabled drivers build config 00:11:29.414 crypto/mvsam: not in enabled drivers build config 00:11:29.415 crypto/nitrox: not in enabled drivers build config 00:11:29.415 crypto/null: not in enabled drivers build config 00:11:29.415 crypto/octeontx: not in enabled drivers build config 00:11:29.415 crypto/openssl: not in enabled drivers build config 00:11:29.415 crypto/scheduler: not in enabled drivers build config 00:11:29.415 crypto/uadk: not in enabled drivers build config 00:11:29.415 crypto/virtio: not in enabled drivers build config 00:11:29.415 compress/isal: not in enabled drivers build config 00:11:29.415 compress/mlx5: not in enabled drivers build config 00:11:29.415 compress/nitrox: not in enabled drivers build config 00:11:29.415 compress/octeontx: not in enabled drivers build config 00:11:29.415 compress/zlib: not in enabled drivers build config 00:11:29.415 regex/*: missing internal dependency, "regexdev" 00:11:29.415 ml/*: missing internal dependency, "mldev" 00:11:29.415 vdpa/ifc: not in enabled drivers build config 00:11:29.415 vdpa/mlx5: not in enabled drivers build config 00:11:29.415 vdpa/nfp: not in enabled drivers build config 00:11:29.415 vdpa/sfc: not in enabled drivers build config 00:11:29.415 event/*: missing internal dependency, "eventdev" 00:11:29.415 baseband/*: missing internal dependency, "bbdev" 00:11:29.415 gpu/*: missing internal dependency, "gpudev" 00:11:29.415 00:11:29.415 00:11:29.415 Build targets in project: 85 00:11:29.415 00:11:29.415 DPDK 24.03.0 00:11:29.415 00:11:29.415 User defined options 00:11:29.415 buildtype : debug 00:11:29.415 default_library : shared 00:11:29.415 libdir : lib 00:11:29.415 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:29.415 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:11:29.415 c_link_args : 00:11:29.415 cpu_instruction_set: native 00:11:29.415 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:11:29.415 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:11:29.415 enable_docs : false 00:11:29.415 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:11:29.415 enable_kmods : false 00:11:29.415 max_lcores : 128 00:11:29.415 tests : false 00:11:29.415 
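The "User defined options" summary above corresponds roughly to a meson setup call on the bundled DPDK of the form sketched below. This is a hand-assembled approximation for readability, not the literal command emitted by SPDK's dpdkbuild glue; the option names are standard meson/DPDK options and the values are taken from the summary (the disable_apps/disable_libs lists are abbreviated here).

    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_docs=false \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    # the ninja entry below then builds from the same build-tmp directory
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp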
00:11:29.415 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:29.415 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:11:29.415 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:11:29.415 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:11:29.415 [3/268] Linking static target lib/librte_kvargs.a 00:11:29.415 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:11:29.415 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:11:29.415 [6/268] Linking static target lib/librte_log.a 00:11:29.415 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.415 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:11:29.415 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:11:29.415 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:11:29.415 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:11:29.415 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:11:29.415 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:11:29.415 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:11:29.415 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:11:29.415 [16/268] Linking static target lib/librte_telemetry.a 00:11:29.415 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:11:29.415 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:11:29.415 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.415 [20/268] Linking target lib/librte_log.so.24.1 00:11:29.675 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:11:29.675 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:11:29.675 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:11:29.934 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:11:29.934 [25/268] Linking target lib/librte_kvargs.so.24.1 00:11:29.934 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:11:30.192 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:11:30.192 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:11:30.192 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:11:30.192 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:11:30.192 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:11:30.192 [32/268] Linking target lib/librte_telemetry.so.24.1 00:11:30.450 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:11:30.450 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:11:30.450 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:11:30.450 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:11:30.709 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:11:30.709 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:11:30.709 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:11:31.020 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:11:31.280 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:11:31.280 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:11:31.280 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:11:31.280 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:11:31.280 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:11:31.280 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:11:31.280 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:11:31.537 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:11:31.537 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:11:31.794 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:11:31.794 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:11:31.794 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:11:32.360 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:11:32.360 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:11:32.360 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:11:32.360 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:11:32.360 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:11:32.360 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:11:32.360 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:11:32.617 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:11:32.617 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:11:32.617 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:11:32.617 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:11:33.184 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:11:33.184 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:11:33.184 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:11:33.184 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:11:33.443 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:11:33.443 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:11:33.443 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:11:33.705 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:11:33.705 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:11:33.705 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:11:33.705 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:11:33.705 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:11:33.964 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:11:33.964 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:11:33.964 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:11:33.964 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:11:34.223 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:11:34.223 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:11:34.223 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:11:34.482 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:11:34.482 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:11:34.482 [85/268] Linking static target lib/librte_ring.a 00:11:34.740 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:11:34.740 [87/268] Linking static target lib/librte_rcu.a 00:11:34.740 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:11:34.740 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:11:34.740 [90/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:11:34.740 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:11:34.740 [92/268] Linking static target lib/librte_eal.a 00:11:34.998 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:11:34.998 [94/268] Linking static target lib/librte_mempool.a 00:11:34.998 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:11:34.998 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:11:35.256 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:11:35.256 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:11:35.256 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:11:35.514 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:11:35.514 [101/268] Linking static target lib/librte_mbuf.a 00:11:35.514 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:11:35.514 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:11:35.514 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:11:35.772 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:11:35.772 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:11:36.030 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:11:36.030 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:11:36.030 [109/268] Linking static target lib/librte_net.a 00:11:36.290 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:11:36.290 [111/268] Linking static target lib/librte_meter.a 00:11:36.290 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.290 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:11:36.549 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:11:36.549 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.549 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:11:36.808 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.808 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:11:36.808 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:11:37.066 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:11:37.066 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:11:37.324 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:11:37.324 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:11:37.582 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:11:37.582 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:11:37.840 [126/268] Linking static target lib/librte_pci.a 00:11:37.840 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:11:37.840 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:11:38.098 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:11:38.098 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:11:38.098 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:11:38.098 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:11:38.098 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.098 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:11:38.098 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:11:38.098 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:11:38.098 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:11:38.356 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:11:38.356 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:11:38.356 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:11:38.356 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:11:38.356 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:11:38.356 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:11:38.356 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:11:38.356 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:11:38.356 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:11:38.614 [147/268] Linking static target lib/librte_ethdev.a 00:11:38.614 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:11:38.873 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:11:38.873 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:11:38.873 [151/268] Linking static target lib/librte_cmdline.a 00:11:39.130 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:11:39.130 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:11:39.389 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:11:39.389 [155/268] Linking static target lib/librte_timer.a 00:11:39.389 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:11:39.389 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:11:39.647 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:11:39.647 [159/268] Linking static target lib/librte_hash.a 00:11:39.647 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:11:39.647 
[161/268] Linking static target lib/librte_compressdev.a 00:11:39.647 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:11:39.905 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.164 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:11:40.164 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:40.164 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:40.164 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:40.423 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:40.423 [169/268] Linking static target lib/librte_dmadev.a 00:11:40.423 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.682 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:40.682 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:40.682 [173/268] Linking static target lib/librte_cryptodev.a 00:11:40.682 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:40.682 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:40.682 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.940 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:40.940 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.198 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:41.198 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.456 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:41.457 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:41.457 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:41.457 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:41.715 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:41.715 [186/268] Linking static target lib/librte_power.a 00:11:41.715 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:41.715 [188/268] Linking static target lib/librte_reorder.a 00:11:42.281 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:11:42.281 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:42.281 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:42.281 [192/268] Linking static target lib/librte_security.a 00:11:42.281 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:11:42.539 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:42.539 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:43.105 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:43.105 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:43.105 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:43.105 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:43.105 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:11:43.105 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:43.363 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:43.621 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:43.621 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:43.621 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:43.879 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:43.879 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:43.879 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:43.879 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:44.138 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:44.138 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:44.138 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:44.138 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:44.138 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:44.138 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:44.138 [216/268] Linking static target drivers/librte_bus_vdev.a 00:11:44.396 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:44.396 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:44.396 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:44.396 [220/268] Linking static target drivers/librte_bus_pci.a 00:11:44.396 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:44.396 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:44.654 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.654 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:44.654 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:44.654 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:44.654 [227/268] Linking static target drivers/librte_mempool_ring.a 00:11:44.912 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:45.479 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:45.479 [230/268] Linking static target lib/librte_vhost.a 00:11:46.046 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.303 [232/268] Linking target lib/librte_eal.so.24.1 00:11:46.303 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:11:46.304 [234/268] Linking target lib/librte_ring.so.24.1 00:11:46.304 [235/268] Linking target lib/librte_pci.so.24.1 00:11:46.304 [236/268] Linking target lib/librte_dmadev.so.24.1 00:11:46.304 [237/268] Linking target lib/librte_meter.so.24.1 00:11:46.304 [238/268] Linking target lib/librte_timer.so.24.1 00:11:46.562 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:11:46.562 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:11:46.562 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:11:46.562 [242/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.562 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:11:46.562 [244/268] Linking target lib/librte_rcu.so.24.1 00:11:46.562 [245/268] Linking target lib/librte_mempool.so.24.1 00:11:46.562 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:11:46.562 [247/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:11:46.562 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:11:46.820 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.820 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:11:46.820 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:11:46.820 [252/268] Linking target lib/librte_mbuf.so.24.1 00:11:46.820 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:11:46.820 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:11:47.078 [255/268] Linking target lib/librte_compressdev.so.24.1 00:11:47.078 [256/268] Linking target lib/librte_net.so.24.1 00:11:47.078 [257/268] Linking target lib/librte_reorder.so.24.1 00:11:47.078 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:11:47.078 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:11:47.078 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:11:47.078 [261/268] Linking target lib/librte_cmdline.so.24.1 00:11:47.078 [262/268] Linking target lib/librte_hash.so.24.1 00:11:47.078 [263/268] Linking target lib/librte_security.so.24.1 00:11:47.078 [264/268] Linking target lib/librte_ethdev.so.24.1 00:11:47.337 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:11:47.337 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:11:47.337 [267/268] Linking target lib/librte_power.so.24.1 00:11:47.337 [268/268] Linking target lib/librte_vhost.so.24.1 00:11:47.337 INFO: autodetecting backend as ninja 00:11:47.337 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:12:19.429 CC lib/ut_mock/mock.o 00:12:19.429 CC lib/log/log.o 00:12:19.429 CC lib/log/log_flags.o 00:12:19.429 CC lib/log/log_deprecated.o 00:12:19.429 CC lib/ut/ut.o 00:12:19.429 LIB libspdk_ut.a 00:12:19.429 LIB libspdk_ut_mock.a 00:12:19.429 LIB libspdk_log.a 00:12:19.429 SO libspdk_ut.so.2.0 00:12:19.429 SO libspdk_ut_mock.so.6.0 00:12:19.429 SO libspdk_log.so.7.1 00:12:19.429 SYMLINK libspdk_ut_mock.so 00:12:19.429 SYMLINK libspdk_ut.so 00:12:19.429 SYMLINK libspdk_log.so 00:12:19.429 CC lib/ioat/ioat.o 00:12:19.429 CC lib/util/base64.o 00:12:19.429 CC lib/util/bit_array.o 00:12:19.429 CC lib/util/cpuset.o 00:12:19.429 CC lib/util/crc16.o 00:12:19.429 CC lib/util/crc32c.o 00:12:19.429 CC lib/util/crc32.o 00:12:19.429 CC lib/dma/dma.o 00:12:19.429 CXX lib/trace_parser/trace.o 00:12:19.429 CC lib/vfio_user/host/vfio_user_pci.o 00:12:19.429 CC lib/util/crc32_ieee.o 00:12:19.429 CC lib/vfio_user/host/vfio_user.o 00:12:19.429 CC 
lib/util/crc64.o 00:12:19.429 CC lib/util/dif.o 00:12:19.429 CC lib/util/fd.o 00:12:19.429 LIB libspdk_dma.a 00:12:19.429 CC lib/util/fd_group.o 00:12:19.429 SO libspdk_dma.so.5.0 00:12:19.429 CC lib/util/file.o 00:12:19.429 LIB libspdk_ioat.a 00:12:19.429 SYMLINK libspdk_dma.so 00:12:19.429 CC lib/util/hexlify.o 00:12:19.429 SO libspdk_ioat.so.7.0 00:12:19.429 CC lib/util/iov.o 00:12:19.429 LIB libspdk_vfio_user.a 00:12:19.430 CC lib/util/math.o 00:12:19.430 SYMLINK libspdk_ioat.so 00:12:19.430 CC lib/util/net.o 00:12:19.430 SO libspdk_vfio_user.so.5.0 00:12:19.430 CC lib/util/pipe.o 00:12:19.430 CC lib/util/strerror_tls.o 00:12:19.430 SYMLINK libspdk_vfio_user.so 00:12:19.430 CC lib/util/string.o 00:12:19.430 CC lib/util/uuid.o 00:12:19.430 CC lib/util/xor.o 00:12:19.430 CC lib/util/zipf.o 00:12:19.430 CC lib/util/md5.o 00:12:19.430 LIB libspdk_util.a 00:12:19.430 SO libspdk_util.so.10.1 00:12:19.430 LIB libspdk_trace_parser.a 00:12:19.430 SO libspdk_trace_parser.so.6.0 00:12:19.430 SYMLINK libspdk_util.so 00:12:19.430 SYMLINK libspdk_trace_parser.so 00:12:19.430 CC lib/rdma_utils/rdma_utils.o 00:12:19.430 CC lib/vmd/led.o 00:12:19.430 CC lib/vmd/vmd.o 00:12:19.430 CC lib/json/json_parse.o 00:12:19.430 CC lib/conf/conf.o 00:12:19.430 CC lib/json/json_write.o 00:12:19.430 CC lib/json/json_util.o 00:12:19.430 CC lib/env_dpdk/env.o 00:12:19.430 CC lib/rdma_provider/common.o 00:12:19.430 CC lib/idxd/idxd.o 00:12:19.430 CC lib/idxd/idxd_user.o 00:12:19.430 LIB libspdk_conf.a 00:12:19.430 CC lib/rdma_provider/rdma_provider_verbs.o 00:12:19.430 CC lib/idxd/idxd_kernel.o 00:12:19.430 SO libspdk_conf.so.6.0 00:12:19.430 LIB libspdk_rdma_utils.a 00:12:19.430 CC lib/env_dpdk/memory.o 00:12:19.430 SYMLINK libspdk_conf.so 00:12:19.430 SO libspdk_rdma_utils.so.1.0 00:12:19.430 LIB libspdk_json.a 00:12:19.430 CC lib/env_dpdk/pci.o 00:12:19.430 SO libspdk_json.so.6.0 00:12:19.430 SYMLINK libspdk_rdma_utils.so 00:12:19.430 CC lib/env_dpdk/init.o 00:12:19.430 CC lib/env_dpdk/threads.o 00:12:19.430 CC lib/env_dpdk/pci_ioat.o 00:12:19.430 LIB libspdk_rdma_provider.a 00:12:19.430 SYMLINK libspdk_json.so 00:12:19.430 CC lib/env_dpdk/pci_virtio.o 00:12:19.430 SO libspdk_rdma_provider.so.6.0 00:12:19.430 SYMLINK libspdk_rdma_provider.so 00:12:19.430 CC lib/env_dpdk/pci_vmd.o 00:12:19.430 LIB libspdk_idxd.a 00:12:19.430 CC lib/env_dpdk/pci_idxd.o 00:12:19.430 SO libspdk_idxd.so.12.1 00:12:19.430 LIB libspdk_vmd.a 00:12:19.430 CC lib/env_dpdk/pci_event.o 00:12:19.430 SO libspdk_vmd.so.6.0 00:12:19.430 CC lib/jsonrpc/jsonrpc_server.o 00:12:19.430 CC lib/env_dpdk/sigbus_handler.o 00:12:19.430 SYMLINK libspdk_idxd.so 00:12:19.430 CC lib/env_dpdk/pci_dpdk.o 00:12:19.430 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:19.430 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:19.430 SYMLINK libspdk_vmd.so 00:12:19.430 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:19.430 CC lib/jsonrpc/jsonrpc_client.o 00:12:19.430 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:19.430 LIB libspdk_jsonrpc.a 00:12:19.430 SO libspdk_jsonrpc.so.6.0 00:12:19.430 SYMLINK libspdk_jsonrpc.so 00:12:19.430 CC lib/rpc/rpc.o 00:12:19.430 LIB libspdk_env_dpdk.a 00:12:19.430 SO libspdk_env_dpdk.so.15.1 00:12:19.430 LIB libspdk_rpc.a 00:12:19.430 SO libspdk_rpc.so.6.0 00:12:19.430 SYMLINK libspdk_env_dpdk.so 00:12:19.430 SYMLINK libspdk_rpc.so 00:12:19.430 CC lib/keyring/keyring.o 00:12:19.430 CC lib/keyring/keyring_rpc.o 00:12:19.430 CC lib/trace/trace.o 00:12:19.430 CC lib/trace/trace_flags.o 00:12:19.430 CC lib/trace/trace_rpc.o 00:12:19.430 CC lib/notify/notify.o 00:12:19.430 CC 
lib/notify/notify_rpc.o 00:12:19.430 LIB libspdk_notify.a 00:12:19.430 SO libspdk_notify.so.6.0 00:12:19.430 SYMLINK libspdk_notify.so 00:12:19.430 LIB libspdk_keyring.a 00:12:19.430 LIB libspdk_trace.a 00:12:19.430 SO libspdk_keyring.so.2.0 00:12:19.430 SO libspdk_trace.so.11.0 00:12:19.430 SYMLINK libspdk_keyring.so 00:12:19.430 SYMLINK libspdk_trace.so 00:12:19.430 CC lib/sock/sock.o 00:12:19.430 CC lib/thread/thread.o 00:12:19.430 CC lib/sock/sock_rpc.o 00:12:19.430 CC lib/thread/iobuf.o 00:12:19.997 LIB libspdk_sock.a 00:12:19.997 SO libspdk_sock.so.10.0 00:12:20.256 SYMLINK libspdk_sock.so 00:12:20.515 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:20.515 CC lib/nvme/nvme_ns_cmd.o 00:12:20.515 CC lib/nvme/nvme_fabric.o 00:12:20.515 CC lib/nvme/nvme_ctrlr.o 00:12:20.515 CC lib/nvme/nvme_ns.o 00:12:20.515 CC lib/nvme/nvme_qpair.o 00:12:20.515 CC lib/nvme/nvme_pcie_common.o 00:12:20.515 CC lib/nvme/nvme_pcie.o 00:12:20.515 CC lib/nvme/nvme.o 00:12:21.450 LIB libspdk_thread.a 00:12:21.450 SO libspdk_thread.so.10.2 00:12:21.450 CC lib/nvme/nvme_quirks.o 00:12:21.450 CC lib/nvme/nvme_transport.o 00:12:21.450 CC lib/nvme/nvme_discovery.o 00:12:21.450 SYMLINK libspdk_thread.so 00:12:21.450 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:21.450 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:21.450 CC lib/nvme/nvme_tcp.o 00:12:21.450 CC lib/accel/accel.o 00:12:21.709 CC lib/accel/accel_rpc.o 00:12:21.709 CC lib/accel/accel_sw.o 00:12:21.967 CC lib/nvme/nvme_opal.o 00:12:21.967 CC lib/nvme/nvme_io_msg.o 00:12:21.967 CC lib/nvme/nvme_poll_group.o 00:12:22.252 CC lib/nvme/nvme_zns.o 00:12:22.252 CC lib/nvme/nvme_stubs.o 00:12:22.252 CC lib/nvme/nvme_auth.o 00:12:22.252 CC lib/blob/blobstore.o 00:12:22.522 CC lib/blob/request.o 00:12:22.522 LIB libspdk_accel.a 00:12:22.522 SO libspdk_accel.so.16.0 00:12:22.782 CC lib/nvme/nvme_cuse.o 00:12:22.782 SYMLINK libspdk_accel.so 00:12:22.782 CC lib/blob/zeroes.o 00:12:22.782 CC lib/blob/blob_bs_dev.o 00:12:22.782 CC lib/nvme/nvme_rdma.o 00:12:23.041 CC lib/init/json_config.o 00:12:23.041 CC lib/virtio/virtio.o 00:12:23.041 CC lib/virtio/virtio_vhost_user.o 00:12:23.041 CC lib/virtio/virtio_vfio_user.o 00:12:23.041 CC lib/fsdev/fsdev.o 00:12:23.041 CC lib/bdev/bdev.o 00:12:23.300 CC lib/init/subsystem.o 00:12:23.300 CC lib/bdev/bdev_rpc.o 00:12:23.300 CC lib/bdev/bdev_zone.o 00:12:23.300 CC lib/fsdev/fsdev_io.o 00:12:23.300 CC lib/virtio/virtio_pci.o 00:12:23.559 CC lib/init/subsystem_rpc.o 00:12:23.559 CC lib/bdev/part.o 00:12:23.559 CC lib/init/rpc.o 00:12:23.559 CC lib/bdev/scsi_nvme.o 00:12:23.559 CC lib/fsdev/fsdev_rpc.o 00:12:23.817 LIB libspdk_virtio.a 00:12:23.817 SO libspdk_virtio.so.7.0 00:12:23.817 LIB libspdk_init.a 00:12:23.817 SYMLINK libspdk_virtio.so 00:12:23.817 SO libspdk_init.so.6.0 00:12:23.817 LIB libspdk_fsdev.a 00:12:23.817 SO libspdk_fsdev.so.1.0 00:12:23.817 SYMLINK libspdk_init.so 00:12:24.075 SYMLINK libspdk_fsdev.so 00:12:24.075 CC lib/event/app.o 00:12:24.075 CC lib/event/reactor.o 00:12:24.075 CC lib/event/log_rpc.o 00:12:24.075 CC lib/event/scheduler_static.o 00:12:24.075 CC lib/event/app_rpc.o 00:12:24.075 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:12:24.334 LIB libspdk_nvme.a 00:12:24.592 LIB libspdk_event.a 00:12:24.592 SO libspdk_nvme.so.15.0 00:12:24.850 SO libspdk_event.so.14.0 00:12:24.850 SYMLINK libspdk_event.so 00:12:24.850 SYMLINK libspdk_nvme.so 00:12:25.108 LIB libspdk_fuse_dispatcher.a 00:12:25.108 SO libspdk_fuse_dispatcher.so.1.0 00:12:25.108 SYMLINK libspdk_fuse_dispatcher.so 00:12:25.366 LIB libspdk_blob.a 00:12:25.624 SO 
libspdk_blob.so.11.0 00:12:25.624 SYMLINK libspdk_blob.so 00:12:25.882 CC lib/lvol/lvol.o 00:12:25.882 CC lib/blobfs/tree.o 00:12:25.882 CC lib/blobfs/blobfs.o 00:12:26.140 LIB libspdk_bdev.a 00:12:26.140 SO libspdk_bdev.so.17.0 00:12:26.398 SYMLINK libspdk_bdev.so 00:12:26.658 CC lib/scsi/dev.o 00:12:26.658 CC lib/scsi/lun.o 00:12:26.658 CC lib/scsi/port.o 00:12:26.658 CC lib/scsi/scsi.o 00:12:26.658 CC lib/nvmf/ctrlr.o 00:12:26.658 CC lib/nbd/nbd.o 00:12:26.658 CC lib/ftl/ftl_core.o 00:12:26.658 CC lib/ublk/ublk.o 00:12:26.658 CC lib/nbd/nbd_rpc.o 00:12:26.917 CC lib/ftl/ftl_init.o 00:12:26.917 LIB libspdk_blobfs.a 00:12:26.917 CC lib/ftl/ftl_layout.o 00:12:26.917 SO libspdk_blobfs.so.10.0 00:12:26.917 CC lib/nvmf/ctrlr_discovery.o 00:12:26.917 CC lib/nvmf/ctrlr_bdev.o 00:12:27.174 SYMLINK libspdk_blobfs.so 00:12:27.174 CC lib/ftl/ftl_debug.o 00:12:27.174 LIB libspdk_lvol.a 00:12:27.174 LIB libspdk_nbd.a 00:12:27.174 CC lib/scsi/scsi_bdev.o 00:12:27.174 SO libspdk_nbd.so.7.0 00:12:27.174 SO libspdk_lvol.so.10.0 00:12:27.174 CC lib/nvmf/subsystem.o 00:12:27.174 SYMLINK libspdk_lvol.so 00:12:27.174 CC lib/nvmf/nvmf.o 00:12:27.174 SYMLINK libspdk_nbd.so 00:12:27.174 CC lib/scsi/scsi_pr.o 00:12:27.432 CC lib/scsi/scsi_rpc.o 00:12:27.432 CC lib/ftl/ftl_io.o 00:12:27.432 CC lib/scsi/task.o 00:12:27.432 CC lib/nvmf/nvmf_rpc.o 00:12:27.706 CC lib/nvmf/transport.o 00:12:27.706 CC lib/ftl/ftl_sb.o 00:12:27.706 CC lib/nvmf/tcp.o 00:12:27.706 CC lib/nvmf/stubs.o 00:12:27.706 CC lib/ublk/ublk_rpc.o 00:12:27.972 LIB libspdk_scsi.a 00:12:27.972 CC lib/ftl/ftl_l2p.o 00:12:27.972 SO libspdk_scsi.so.9.0 00:12:27.972 LIB libspdk_ublk.a 00:12:27.972 SYMLINK libspdk_scsi.so 00:12:27.972 CC lib/nvmf/mdns_server.o 00:12:27.972 SO libspdk_ublk.so.3.0 00:12:28.230 CC lib/ftl/ftl_l2p_flat.o 00:12:28.230 CC lib/nvmf/rdma.o 00:12:28.230 SYMLINK libspdk_ublk.so 00:12:28.230 CC lib/nvmf/auth.o 00:12:28.230 CC lib/ftl/ftl_nv_cache.o 00:12:28.488 CC lib/ftl/ftl_band.o 00:12:28.488 CC lib/ftl/ftl_band_ops.o 00:12:28.488 CC lib/ftl/ftl_writer.o 00:12:28.488 CC lib/ftl/ftl_rq.o 00:12:28.754 CC lib/ftl/ftl_reloc.o 00:12:28.754 CC lib/ftl/ftl_l2p_cache.o 00:12:28.754 CC lib/ftl/ftl_p2l.o 00:12:28.754 CC lib/ftl/ftl_p2l_log.o 00:12:28.754 CC lib/iscsi/conn.o 00:12:29.013 CC lib/iscsi/init_grp.o 00:12:29.013 CC lib/ftl/mngt/ftl_mngt.o 00:12:29.013 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:12:29.270 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:12:29.270 CC lib/iscsi/iscsi.o 00:12:29.270 CC lib/iscsi/param.o 00:12:29.270 CC lib/ftl/mngt/ftl_mngt_startup.o 00:12:29.270 CC lib/ftl/mngt/ftl_mngt_md.o 00:12:29.270 CC lib/ftl/mngt/ftl_mngt_misc.o 00:12:29.527 CC lib/iscsi/portal_grp.o 00:12:29.527 CC lib/iscsi/tgt_node.o 00:12:29.527 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:12:29.527 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:12:29.527 CC lib/vhost/vhost.o 00:12:29.527 CC lib/vhost/vhost_rpc.o 00:12:29.785 CC lib/vhost/vhost_scsi.o 00:12:29.785 CC lib/iscsi/iscsi_subsystem.o 00:12:29.785 CC lib/vhost/vhost_blk.o 00:12:29.785 CC lib/vhost/rte_vhost_user.o 00:12:29.785 CC lib/ftl/mngt/ftl_mngt_band.o 00:12:30.043 CC lib/iscsi/iscsi_rpc.o 00:12:30.301 CC lib/iscsi/task.o 00:12:30.301 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:12:30.301 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:12:30.301 LIB libspdk_nvmf.a 00:12:30.301 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:12:30.301 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:12:30.301 CC lib/ftl/utils/ftl_conf.o 00:12:30.559 CC lib/ftl/utils/ftl_md.o 00:12:30.559 SO libspdk_nvmf.so.19.1 00:12:30.559 CC lib/ftl/utils/ftl_mempool.o 00:12:30.559 CC 
lib/ftl/utils/ftl_bitmap.o 00:12:30.559 CC lib/ftl/utils/ftl_property.o 00:12:30.817 SYMLINK libspdk_nvmf.so 00:12:30.817 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:12:30.817 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:12:30.817 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:12:30.817 LIB libspdk_iscsi.a 00:12:30.817 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:12:30.817 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:12:30.817 SO libspdk_iscsi.so.8.0 00:12:30.817 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:12:30.817 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:12:30.817 CC lib/ftl/upgrade/ftl_sb_v3.o 00:12:31.076 CC lib/ftl/upgrade/ftl_sb_v5.o 00:12:31.076 LIB libspdk_vhost.a 00:12:31.076 CC lib/ftl/nvc/ftl_nvc_dev.o 00:12:31.076 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:12:31.076 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:12:31.076 SYMLINK libspdk_iscsi.so 00:12:31.076 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:12:31.076 SO libspdk_vhost.so.8.0 00:12:31.076 CC lib/ftl/base/ftl_base_dev.o 00:12:31.076 CC lib/ftl/base/ftl_base_bdev.o 00:12:31.076 CC lib/ftl/ftl_trace.o 00:12:31.076 SYMLINK libspdk_vhost.so 00:12:31.334 LIB libspdk_ftl.a 00:12:31.592 SO libspdk_ftl.so.9.0 00:12:32.159 SYMLINK libspdk_ftl.so 00:12:32.418 CC module/env_dpdk/env_dpdk_rpc.o 00:12:32.418 CC module/keyring/file/keyring.o 00:12:32.418 CC module/keyring/linux/keyring.o 00:12:32.418 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:12:32.418 CC module/accel/ioat/accel_ioat.o 00:12:32.418 CC module/sock/posix/posix.o 00:12:32.418 CC module/accel/error/accel_error.o 00:12:32.418 CC module/fsdev/aio/fsdev_aio.o 00:12:32.418 CC module/blob/bdev/blob_bdev.o 00:12:32.418 CC module/scheduler/dynamic/scheduler_dynamic.o 00:12:32.418 LIB libspdk_env_dpdk_rpc.a 00:12:32.676 SO libspdk_env_dpdk_rpc.so.6.0 00:12:32.676 LIB libspdk_scheduler_dpdk_governor.a 00:12:32.676 SYMLINK libspdk_env_dpdk_rpc.so 00:12:32.676 CC module/fsdev/aio/fsdev_aio_rpc.o 00:12:32.676 CC module/keyring/linux/keyring_rpc.o 00:12:32.676 CC module/keyring/file/keyring_rpc.o 00:12:32.676 SO libspdk_scheduler_dpdk_governor.so.4.0 00:12:32.676 CC module/accel/error/accel_error_rpc.o 00:12:32.676 SYMLINK libspdk_scheduler_dpdk_governor.so 00:12:32.676 CC module/fsdev/aio/linux_aio_mgr.o 00:12:32.676 CC module/accel/ioat/accel_ioat_rpc.o 00:12:32.676 LIB libspdk_scheduler_dynamic.a 00:12:32.676 SO libspdk_scheduler_dynamic.so.4.0 00:12:32.676 LIB libspdk_blob_bdev.a 00:12:32.676 LIB libspdk_keyring_linux.a 00:12:32.934 LIB libspdk_accel_error.a 00:12:32.934 LIB libspdk_keyring_file.a 00:12:32.934 SYMLINK libspdk_scheduler_dynamic.so 00:12:32.934 SO libspdk_blob_bdev.so.11.0 00:12:32.934 SO libspdk_keyring_file.so.2.0 00:12:32.934 SO libspdk_keyring_linux.so.1.0 00:12:32.934 SO libspdk_accel_error.so.2.0 00:12:32.934 LIB libspdk_accel_ioat.a 00:12:32.934 SYMLINK libspdk_keyring_linux.so 00:12:32.934 SYMLINK libspdk_accel_error.so 00:12:32.934 SYMLINK libspdk_blob_bdev.so 00:12:32.934 SYMLINK libspdk_keyring_file.so 00:12:32.934 SO libspdk_accel_ioat.so.6.0 00:12:32.934 SYMLINK libspdk_accel_ioat.so 00:12:32.934 CC module/scheduler/gscheduler/gscheduler.o 00:12:33.191 CC module/accel/dsa/accel_dsa.o 00:12:33.191 CC module/sock/uring/uring.o 00:12:33.191 CC module/accel/iaa/accel_iaa.o 00:12:33.191 LIB libspdk_fsdev_aio.a 00:12:33.191 SO libspdk_fsdev_aio.so.1.0 00:12:33.191 LIB libspdk_scheduler_gscheduler.a 00:12:33.191 CC module/bdev/delay/vbdev_delay.o 00:12:33.191 CC module/blobfs/bdev/blobfs_bdev.o 00:12:33.191 CC module/bdev/gpt/gpt.o 00:12:33.191 SO libspdk_scheduler_gscheduler.so.4.0 
00:12:33.191 LIB libspdk_sock_posix.a 00:12:33.191 CC module/bdev/error/vbdev_error.o 00:12:33.191 SO libspdk_sock_posix.so.6.0 00:12:33.192 SYMLINK libspdk_fsdev_aio.so 00:12:33.192 SYMLINK libspdk_scheduler_gscheduler.so 00:12:33.192 CC module/accel/dsa/accel_dsa_rpc.o 00:12:33.450 SYMLINK libspdk_sock_posix.so 00:12:33.450 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:12:33.450 CC module/accel/iaa/accel_iaa_rpc.o 00:12:33.450 LIB libspdk_accel_dsa.a 00:12:33.450 CC module/bdev/lvol/vbdev_lvol.o 00:12:33.450 CC module/bdev/gpt/vbdev_gpt.o 00:12:33.450 SO libspdk_accel_dsa.so.5.0 00:12:33.450 CC module/bdev/malloc/bdev_malloc.o 00:12:33.450 CC module/bdev/error/vbdev_error_rpc.o 00:12:33.450 CC module/bdev/null/bdev_null.o 00:12:33.709 SYMLINK libspdk_accel_dsa.so 00:12:33.709 CC module/bdev/null/bdev_null_rpc.o 00:12:33.709 LIB libspdk_accel_iaa.a 00:12:33.709 CC module/bdev/delay/vbdev_delay_rpc.o 00:12:33.709 SO libspdk_accel_iaa.so.3.0 00:12:33.709 LIB libspdk_blobfs_bdev.a 00:12:33.709 SO libspdk_blobfs_bdev.so.6.0 00:12:33.709 SYMLINK libspdk_accel_iaa.so 00:12:33.709 CC module/bdev/malloc/bdev_malloc_rpc.o 00:12:33.709 LIB libspdk_bdev_error.a 00:12:33.709 SYMLINK libspdk_blobfs_bdev.so 00:12:33.709 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:12:33.709 SO libspdk_bdev_error.so.6.0 00:12:33.709 LIB libspdk_bdev_gpt.a 00:12:33.709 LIB libspdk_bdev_delay.a 00:12:33.709 SO libspdk_bdev_gpt.so.6.0 00:12:33.709 LIB libspdk_bdev_null.a 00:12:33.709 SYMLINK libspdk_bdev_error.so 00:12:33.966 SO libspdk_bdev_delay.so.6.0 00:12:33.966 LIB libspdk_sock_uring.a 00:12:33.966 SO libspdk_bdev_null.so.6.0 00:12:33.966 SYMLINK libspdk_bdev_gpt.so 00:12:33.966 SO libspdk_sock_uring.so.5.0 00:12:33.966 SYMLINK libspdk_bdev_delay.so 00:12:33.966 SYMLINK libspdk_bdev_null.so 00:12:33.966 LIB libspdk_bdev_malloc.a 00:12:33.966 SYMLINK libspdk_sock_uring.so 00:12:33.966 CC module/bdev/nvme/bdev_nvme.o 00:12:33.966 SO libspdk_bdev_malloc.so.6.0 00:12:33.966 CC module/bdev/nvme/bdev_nvme_rpc.o 00:12:33.966 CC module/bdev/passthru/vbdev_passthru.o 00:12:33.966 SYMLINK libspdk_bdev_malloc.so 00:12:33.966 CC module/bdev/raid/bdev_raid.o 00:12:34.223 CC module/bdev/split/vbdev_split.o 00:12:34.224 CC module/bdev/uring/bdev_uring.o 00:12:34.224 CC module/bdev/zone_block/vbdev_zone_block.o 00:12:34.224 LIB libspdk_bdev_lvol.a 00:12:34.224 CC module/bdev/aio/bdev_aio.o 00:12:34.224 SO libspdk_bdev_lvol.so.6.0 00:12:34.224 CC module/bdev/ftl/bdev_ftl.o 00:12:34.224 SYMLINK libspdk_bdev_lvol.so 00:12:34.224 CC module/bdev/aio/bdev_aio_rpc.o 00:12:34.482 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:12:34.482 CC module/bdev/split/vbdev_split_rpc.o 00:12:34.482 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:12:34.482 CC module/bdev/nvme/nvme_rpc.o 00:12:34.482 CC module/bdev/uring/bdev_uring_rpc.o 00:12:34.482 LIB libspdk_bdev_aio.a 00:12:34.482 CC module/bdev/ftl/bdev_ftl_rpc.o 00:12:34.482 SO libspdk_bdev_aio.so.6.0 00:12:34.482 LIB libspdk_bdev_passthru.a 00:12:34.482 LIB libspdk_bdev_split.a 00:12:34.482 SO libspdk_bdev_passthru.so.6.0 00:12:34.482 SO libspdk_bdev_split.so.6.0 00:12:34.739 SYMLINK libspdk_bdev_aio.so 00:12:34.740 LIB libspdk_bdev_zone_block.a 00:12:34.740 CC module/bdev/nvme/bdev_mdns_client.o 00:12:34.740 SYMLINK libspdk_bdev_split.so 00:12:34.740 SO libspdk_bdev_zone_block.so.6.0 00:12:34.740 SYMLINK libspdk_bdev_passthru.so 00:12:34.740 CC module/bdev/nvme/vbdev_opal.o 00:12:34.740 CC module/bdev/raid/bdev_raid_rpc.o 00:12:34.740 LIB libspdk_bdev_uring.a 00:12:34.740 SYMLINK 
libspdk_bdev_zone_block.so 00:12:34.740 SO libspdk_bdev_uring.so.6.0 00:12:34.740 LIB libspdk_bdev_ftl.a 00:12:34.740 CC module/bdev/raid/bdev_raid_sb.o 00:12:34.740 SYMLINK libspdk_bdev_uring.so 00:12:34.740 CC module/bdev/raid/raid0.o 00:12:34.999 SO libspdk_bdev_ftl.so.6.0 00:12:34.999 CC module/bdev/nvme/vbdev_opal_rpc.o 00:12:34.999 CC module/bdev/raid/raid1.o 00:12:34.999 CC module/bdev/iscsi/bdev_iscsi.o 00:12:34.999 SYMLINK libspdk_bdev_ftl.so 00:12:34.999 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:12:34.999 CC module/bdev/raid/concat.o 00:12:34.999 CC module/bdev/virtio/bdev_virtio_scsi.o 00:12:35.260 CC module/bdev/virtio/bdev_virtio_blk.o 00:12:35.260 CC module/bdev/virtio/bdev_virtio_rpc.o 00:12:35.260 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:12:35.260 LIB libspdk_bdev_raid.a 00:12:35.260 SO libspdk_bdev_raid.so.6.0 00:12:35.260 LIB libspdk_bdev_iscsi.a 00:12:35.519 SYMLINK libspdk_bdev_raid.so 00:12:35.519 SO libspdk_bdev_iscsi.so.6.0 00:12:35.519 SYMLINK libspdk_bdev_iscsi.so 00:12:35.519 LIB libspdk_bdev_virtio.a 00:12:35.519 SO libspdk_bdev_virtio.so.6.0 00:12:35.777 SYMLINK libspdk_bdev_virtio.so 00:12:36.344 LIB libspdk_bdev_nvme.a 00:12:36.603 SO libspdk_bdev_nvme.so.7.0 00:12:36.603 SYMLINK libspdk_bdev_nvme.so 00:12:37.170 CC module/event/subsystems/sock/sock.o 00:12:37.170 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:12:37.170 CC module/event/subsystems/scheduler/scheduler.o 00:12:37.170 CC module/event/subsystems/fsdev/fsdev.o 00:12:37.170 CC module/event/subsystems/iobuf/iobuf.o 00:12:37.170 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:12:37.170 CC module/event/subsystems/vmd/vmd.o 00:12:37.170 CC module/event/subsystems/vmd/vmd_rpc.o 00:12:37.170 CC module/event/subsystems/keyring/keyring.o 00:12:37.170 LIB libspdk_event_vhost_blk.a 00:12:37.170 LIB libspdk_event_vmd.a 00:12:37.170 LIB libspdk_event_sock.a 00:12:37.170 LIB libspdk_event_scheduler.a 00:12:37.170 LIB libspdk_event_keyring.a 00:12:37.170 LIB libspdk_event_fsdev.a 00:12:37.170 SO libspdk_event_vhost_blk.so.3.0 00:12:37.170 SO libspdk_event_sock.so.5.0 00:12:37.170 SO libspdk_event_vmd.so.6.0 00:12:37.170 LIB libspdk_event_iobuf.a 00:12:37.170 SO libspdk_event_scheduler.so.4.0 00:12:37.171 SO libspdk_event_keyring.so.1.0 00:12:37.171 SO libspdk_event_fsdev.so.1.0 00:12:37.428 SO libspdk_event_iobuf.so.3.0 00:12:37.429 SYMLINK libspdk_event_vhost_blk.so 00:12:37.429 SYMLINK libspdk_event_sock.so 00:12:37.429 SYMLINK libspdk_event_vmd.so 00:12:37.429 SYMLINK libspdk_event_fsdev.so 00:12:37.429 SYMLINK libspdk_event_scheduler.so 00:12:37.429 SYMLINK libspdk_event_keyring.so 00:12:37.429 SYMLINK libspdk_event_iobuf.so 00:12:37.686 CC module/event/subsystems/accel/accel.o 00:12:37.945 LIB libspdk_event_accel.a 00:12:37.945 SO libspdk_event_accel.so.6.0 00:12:37.945 SYMLINK libspdk_event_accel.so 00:12:38.202 CC module/event/subsystems/bdev/bdev.o 00:12:38.459 LIB libspdk_event_bdev.a 00:12:38.459 SO libspdk_event_bdev.so.6.0 00:12:38.459 SYMLINK libspdk_event_bdev.so 00:12:38.717 CC module/event/subsystems/scsi/scsi.o 00:12:38.717 CC module/event/subsystems/ublk/ublk.o 00:12:38.717 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:12:38.717 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:12:38.717 CC module/event/subsystems/nbd/nbd.o 00:12:38.975 LIB libspdk_event_scsi.a 00:12:38.975 LIB libspdk_event_nbd.a 00:12:38.975 LIB libspdk_event_ublk.a 00:12:38.975 SO libspdk_event_scsi.so.6.0 00:12:38.975 SO libspdk_event_nbd.so.6.0 00:12:38.975 SO libspdk_event_ublk.so.3.0 00:12:38.975 SYMLINK 
libspdk_event_ublk.so 00:12:38.975 SYMLINK libspdk_event_nbd.so 00:12:38.975 SYMLINK libspdk_event_scsi.so 00:12:38.975 LIB libspdk_event_nvmf.a 00:12:39.232 SO libspdk_event_nvmf.so.6.0 00:12:39.232 SYMLINK libspdk_event_nvmf.so 00:12:39.232 CC module/event/subsystems/iscsi/iscsi.o 00:12:39.232 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:12:39.490 LIB libspdk_event_iscsi.a 00:12:39.490 LIB libspdk_event_vhost_scsi.a 00:12:39.490 SO libspdk_event_iscsi.so.6.0 00:12:39.490 SO libspdk_event_vhost_scsi.so.3.0 00:12:39.490 SYMLINK libspdk_event_vhost_scsi.so 00:12:39.490 SYMLINK libspdk_event_iscsi.so 00:12:39.748 SO libspdk.so.6.0 00:12:39.748 SYMLINK libspdk.so 00:12:40.038 CXX app/trace/trace.o 00:12:40.038 CC app/trace_record/trace_record.o 00:12:40.038 CC app/iscsi_tgt/iscsi_tgt.o 00:12:40.038 CC examples/interrupt_tgt/interrupt_tgt.o 00:12:40.038 CC app/nvmf_tgt/nvmf_main.o 00:12:40.038 CC examples/util/zipf/zipf.o 00:12:40.038 CC test/app/bdev_svc/bdev_svc.o 00:12:40.296 CC examples/ioat/perf/perf.o 00:12:40.296 CC test/thread/poller_perf/poller_perf.o 00:12:40.296 CC test/dma/test_dma/test_dma.o 00:12:40.296 LINK spdk_trace_record 00:12:40.296 LINK interrupt_tgt 00:12:40.296 LINK nvmf_tgt 00:12:40.296 LINK zipf 00:12:40.296 LINK iscsi_tgt 00:12:40.296 LINK bdev_svc 00:12:40.296 LINK poller_perf 00:12:40.296 LINK ioat_perf 00:12:40.555 LINK spdk_trace 00:12:40.555 CC examples/ioat/verify/verify.o 00:12:40.555 CC app/spdk_tgt/spdk_tgt.o 00:12:40.555 CC app/spdk_lspci/spdk_lspci.o 00:12:40.555 TEST_HEADER include/spdk/accel.h 00:12:40.817 TEST_HEADER include/spdk/accel_module.h 00:12:40.817 TEST_HEADER include/spdk/assert.h 00:12:40.817 TEST_HEADER include/spdk/barrier.h 00:12:40.817 TEST_HEADER include/spdk/base64.h 00:12:40.817 TEST_HEADER include/spdk/bdev.h 00:12:40.817 TEST_HEADER include/spdk/bdev_module.h 00:12:40.817 TEST_HEADER include/spdk/bdev_zone.h 00:12:40.817 TEST_HEADER include/spdk/bit_array.h 00:12:40.817 TEST_HEADER include/spdk/bit_pool.h 00:12:40.817 TEST_HEADER include/spdk/blob_bdev.h 00:12:40.817 TEST_HEADER include/spdk/blobfs_bdev.h 00:12:40.817 TEST_HEADER include/spdk/blobfs.h 00:12:40.817 TEST_HEADER include/spdk/blob.h 00:12:40.817 CC app/spdk_nvme_perf/perf.o 00:12:40.817 TEST_HEADER include/spdk/conf.h 00:12:40.817 TEST_HEADER include/spdk/config.h 00:12:40.817 TEST_HEADER include/spdk/cpuset.h 00:12:40.817 TEST_HEADER include/spdk/crc16.h 00:12:40.817 CC app/spdk_nvme_identify/identify.o 00:12:40.817 TEST_HEADER include/spdk/crc32.h 00:12:40.817 TEST_HEADER include/spdk/crc64.h 00:12:40.817 TEST_HEADER include/spdk/dif.h 00:12:40.817 TEST_HEADER include/spdk/dma.h 00:12:40.817 TEST_HEADER include/spdk/endian.h 00:12:40.817 TEST_HEADER include/spdk/env_dpdk.h 00:12:40.817 TEST_HEADER include/spdk/env.h 00:12:40.817 TEST_HEADER include/spdk/event.h 00:12:40.817 TEST_HEADER include/spdk/fd_group.h 00:12:40.817 TEST_HEADER include/spdk/fd.h 00:12:40.817 TEST_HEADER include/spdk/file.h 00:12:40.817 TEST_HEADER include/spdk/fsdev.h 00:12:40.817 TEST_HEADER include/spdk/fsdev_module.h 00:12:40.817 TEST_HEADER include/spdk/ftl.h 00:12:40.817 TEST_HEADER include/spdk/fuse_dispatcher.h 00:12:40.817 TEST_HEADER include/spdk/gpt_spec.h 00:12:40.817 TEST_HEADER include/spdk/hexlify.h 00:12:40.817 TEST_HEADER include/spdk/histogram_data.h 00:12:40.817 TEST_HEADER include/spdk/idxd.h 00:12:40.817 TEST_HEADER include/spdk/idxd_spec.h 00:12:40.817 TEST_HEADER include/spdk/init.h 00:12:40.817 TEST_HEADER include/spdk/ioat.h 00:12:40.817 TEST_HEADER 
include/spdk/ioat_spec.h 00:12:40.817 TEST_HEADER include/spdk/iscsi_spec.h 00:12:40.817 TEST_HEADER include/spdk/json.h 00:12:40.817 TEST_HEADER include/spdk/jsonrpc.h 00:12:40.817 TEST_HEADER include/spdk/keyring.h 00:12:40.817 TEST_HEADER include/spdk/keyring_module.h 00:12:40.817 TEST_HEADER include/spdk/likely.h 00:12:40.817 TEST_HEADER include/spdk/log.h 00:12:40.817 TEST_HEADER include/spdk/lvol.h 00:12:40.817 TEST_HEADER include/spdk/md5.h 00:12:40.817 TEST_HEADER include/spdk/memory.h 00:12:40.817 CC examples/thread/thread/thread_ex.o 00:12:40.817 TEST_HEADER include/spdk/mmio.h 00:12:40.817 TEST_HEADER include/spdk/nbd.h 00:12:40.817 TEST_HEADER include/spdk/net.h 00:12:40.817 TEST_HEADER include/spdk/notify.h 00:12:40.817 LINK test_dma 00:12:40.817 TEST_HEADER include/spdk/nvme.h 00:12:40.817 TEST_HEADER include/spdk/nvme_intel.h 00:12:40.817 TEST_HEADER include/spdk/nvme_ocssd.h 00:12:40.817 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:12:40.817 TEST_HEADER include/spdk/nvme_spec.h 00:12:40.817 LINK spdk_lspci 00:12:40.817 TEST_HEADER include/spdk/nvme_zns.h 00:12:40.817 TEST_HEADER include/spdk/nvmf_cmd.h 00:12:40.817 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:12:40.817 TEST_HEADER include/spdk/nvmf.h 00:12:40.817 TEST_HEADER include/spdk/nvmf_spec.h 00:12:40.817 TEST_HEADER include/spdk/nvmf_transport.h 00:12:40.817 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:12:40.817 CC app/spdk_nvme_discover/discovery_aer.o 00:12:40.817 TEST_HEADER include/spdk/opal.h 00:12:40.817 TEST_HEADER include/spdk/opal_spec.h 00:12:40.817 TEST_HEADER include/spdk/pci_ids.h 00:12:40.817 TEST_HEADER include/spdk/pipe.h 00:12:40.817 TEST_HEADER include/spdk/queue.h 00:12:40.817 TEST_HEADER include/spdk/reduce.h 00:12:40.817 TEST_HEADER include/spdk/rpc.h 00:12:40.817 TEST_HEADER include/spdk/scheduler.h 00:12:40.817 TEST_HEADER include/spdk/scsi.h 00:12:40.817 TEST_HEADER include/spdk/scsi_spec.h 00:12:40.817 TEST_HEADER include/spdk/sock.h 00:12:40.817 TEST_HEADER include/spdk/stdinc.h 00:12:40.817 TEST_HEADER include/spdk/string.h 00:12:40.817 TEST_HEADER include/spdk/thread.h 00:12:40.817 TEST_HEADER include/spdk/trace.h 00:12:40.817 TEST_HEADER include/spdk/trace_parser.h 00:12:40.817 TEST_HEADER include/spdk/tree.h 00:12:40.817 TEST_HEADER include/spdk/ublk.h 00:12:40.817 TEST_HEADER include/spdk/util.h 00:12:40.817 TEST_HEADER include/spdk/uuid.h 00:12:40.817 TEST_HEADER include/spdk/version.h 00:12:40.817 TEST_HEADER include/spdk/vfio_user_pci.h 00:12:40.817 TEST_HEADER include/spdk/vfio_user_spec.h 00:12:40.817 TEST_HEADER include/spdk/vhost.h 00:12:40.817 TEST_HEADER include/spdk/vmd.h 00:12:40.817 TEST_HEADER include/spdk/xor.h 00:12:40.817 LINK verify 00:12:40.817 TEST_HEADER include/spdk/zipf.h 00:12:40.817 CXX test/cpp_headers/accel.o 00:12:40.817 LINK spdk_tgt 00:12:41.076 LINK spdk_nvme_discover 00:12:41.076 CXX test/cpp_headers/accel_module.o 00:12:41.076 LINK thread 00:12:41.076 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:12:41.076 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:12:41.076 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:12:41.076 CC app/spdk_top/spdk_top.o 00:12:41.335 LINK nvme_fuzz 00:12:41.335 CXX test/cpp_headers/assert.o 00:12:41.335 CXX test/cpp_headers/barrier.o 00:12:41.335 CXX test/cpp_headers/base64.o 00:12:41.335 CC examples/sock/hello_world/hello_sock.o 00:12:41.335 CXX test/cpp_headers/bdev.o 00:12:41.592 CXX test/cpp_headers/bdev_module.o 00:12:41.592 LINK vhost_fuzz 00:12:41.592 LINK spdk_nvme_identify 00:12:41.592 LINK spdk_nvme_perf 00:12:41.850 LINK 
hello_sock 00:12:41.850 CC examples/vmd/lsvmd/lsvmd.o 00:12:41.850 CXX test/cpp_headers/bdev_zone.o 00:12:41.850 CC examples/vmd/led/led.o 00:12:41.850 CXX test/cpp_headers/bit_array.o 00:12:41.850 CC test/env/mem_callbacks/mem_callbacks.o 00:12:41.850 CXX test/cpp_headers/bit_pool.o 00:12:41.850 LINK lsvmd 00:12:41.850 LINK led 00:12:42.107 CC test/app/histogram_perf/histogram_perf.o 00:12:42.107 CXX test/cpp_headers/blob_bdev.o 00:12:42.107 CC examples/idxd/perf/perf.o 00:12:42.107 LINK spdk_top 00:12:42.107 CC app/vhost/vhost.o 00:12:42.107 CC test/event/event_perf/event_perf.o 00:12:42.107 LINK histogram_perf 00:12:42.107 CC test/event/reactor/reactor.o 00:12:42.367 CXX test/cpp_headers/blobfs_bdev.o 00:12:42.367 CC test/event/reactor_perf/reactor_perf.o 00:12:42.367 CXX test/cpp_headers/blobfs.o 00:12:42.367 LINK vhost 00:12:42.367 LINK event_perf 00:12:42.367 LINK reactor 00:12:42.367 LINK idxd_perf 00:12:42.367 LINK reactor_perf 00:12:42.367 LINK mem_callbacks 00:12:42.367 CXX test/cpp_headers/blob.o 00:12:42.637 CC test/rpc_client/rpc_client_test.o 00:12:42.637 CC test/nvme/aer/aer.o 00:12:42.637 CC test/nvme/reset/reset.o 00:12:42.637 CC test/nvme/sgl/sgl.o 00:12:42.637 CXX test/cpp_headers/conf.o 00:12:42.637 CC test/env/vtophys/vtophys.o 00:12:42.637 CC test/event/app_repeat/app_repeat.o 00:12:42.895 CC app/spdk_dd/spdk_dd.o 00:12:42.895 LINK rpc_client_test 00:12:42.895 CC examples/fsdev/hello_world/hello_fsdev.o 00:12:42.895 LINK iscsi_fuzz 00:12:42.895 LINK vtophys 00:12:42.895 LINK reset 00:12:42.895 CXX test/cpp_headers/config.o 00:12:42.896 LINK app_repeat 00:12:42.896 CXX test/cpp_headers/cpuset.o 00:12:42.896 LINK sgl 00:12:42.896 LINK aer 00:12:43.154 CC test/app/jsoncat/jsoncat.o 00:12:43.154 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:12:43.154 CXX test/cpp_headers/crc16.o 00:12:43.154 LINK hello_fsdev 00:12:43.154 CC test/nvme/e2edp/nvme_dp.o 00:12:43.154 LINK jsoncat 00:12:43.154 LINK spdk_dd 00:12:43.412 CC test/env/memory/memory_ut.o 00:12:43.412 CC test/event/scheduler/scheduler.o 00:12:43.412 CC test/app/stub/stub.o 00:12:43.412 CC test/env/pci/pci_ut.o 00:12:43.412 LINK env_dpdk_post_init 00:12:43.412 CXX test/cpp_headers/crc32.o 00:12:43.412 LINK stub 00:12:43.671 LINK nvme_dp 00:12:43.671 LINK scheduler 00:12:43.671 CXX test/cpp_headers/crc64.o 00:12:43.671 CXX test/cpp_headers/dif.o 00:12:43.671 CC examples/accel/perf/accel_perf.o 00:12:43.671 CC app/fio/nvme/fio_plugin.o 00:12:43.671 CC app/fio/bdev/fio_plugin.o 00:12:43.671 CXX test/cpp_headers/dma.o 00:12:43.671 LINK pci_ut 00:12:43.930 CXX test/cpp_headers/endian.o 00:12:43.930 CXX test/cpp_headers/env_dpdk.o 00:12:43.930 CC test/nvme/overhead/overhead.o 00:12:43.930 CC test/accel/dif/dif.o 00:12:43.930 CXX test/cpp_headers/env.o 00:12:43.930 CXX test/cpp_headers/event.o 00:12:43.930 CXX test/cpp_headers/fd_group.o 00:12:44.188 CC test/nvme/err_injection/err_injection.o 00:12:44.188 CXX test/cpp_headers/fd.o 00:12:44.188 LINK overhead 00:12:44.188 LINK accel_perf 00:12:44.188 CXX test/cpp_headers/file.o 00:12:44.188 CC test/nvme/startup/startup.o 00:12:44.188 LINK spdk_nvme 00:12:44.454 LINK err_injection 00:12:44.454 CXX test/cpp_headers/fsdev.o 00:12:44.454 LINK spdk_bdev 00:12:44.454 CXX test/cpp_headers/fsdev_module.o 00:12:44.454 CXX test/cpp_headers/ftl.o 00:12:44.454 CXX test/cpp_headers/fuse_dispatcher.o 00:12:44.454 LINK startup 00:12:44.454 CXX test/cpp_headers/gpt_spec.o 00:12:44.713 CXX test/cpp_headers/hexlify.o 00:12:44.713 CXX test/cpp_headers/histogram_data.o 00:12:44.713 LINK 
dif 00:12:44.713 CC examples/blob/hello_world/hello_blob.o 00:12:44.713 CC test/blobfs/mkfs/mkfs.o 00:12:44.713 LINK memory_ut 00:12:44.713 CXX test/cpp_headers/idxd.o 00:12:44.713 CXX test/cpp_headers/idxd_spec.o 00:12:44.713 CXX test/cpp_headers/init.o 00:12:44.713 CC test/nvme/reserve/reserve.o 00:12:44.973 CC test/nvme/simple_copy/simple_copy.o 00:12:44.973 LINK mkfs 00:12:44.973 CC test/nvme/connect_stress/connect_stress.o 00:12:44.973 LINK hello_blob 00:12:44.973 CC test/lvol/esnap/esnap.o 00:12:44.973 CXX test/cpp_headers/ioat.o 00:12:44.973 CC test/nvme/boot_partition/boot_partition.o 00:12:44.973 CC test/nvme/compliance/nvme_compliance.o 00:12:44.973 LINK reserve 00:12:44.973 CC test/nvme/fused_ordering/fused_ordering.o 00:12:45.233 LINK connect_stress 00:12:45.233 CXX test/cpp_headers/ioat_spec.o 00:12:45.233 LINK simple_copy 00:12:45.233 LINK boot_partition 00:12:45.233 CXX test/cpp_headers/iscsi_spec.o 00:12:45.233 LINK fused_ordering 00:12:45.233 CXX test/cpp_headers/json.o 00:12:45.492 CC examples/blob/cli/blobcli.o 00:12:45.492 LINK nvme_compliance 00:12:45.492 CC test/bdev/bdevio/bdevio.o 00:12:45.492 CC test/nvme/doorbell_aers/doorbell_aers.o 00:12:45.492 CC test/nvme/fdp/fdp.o 00:12:45.492 CC test/nvme/cuse/cuse.o 00:12:45.492 CXX test/cpp_headers/jsonrpc.o 00:12:45.750 CC examples/nvme/hello_world/hello_world.o 00:12:45.750 LINK doorbell_aers 00:12:45.750 CXX test/cpp_headers/keyring.o 00:12:45.750 CC examples/nvme/reconnect/reconnect.o 00:12:45.750 CC examples/bdev/hello_world/hello_bdev.o 00:12:45.750 LINK fdp 00:12:46.009 LINK blobcli 00:12:46.009 CXX test/cpp_headers/keyring_module.o 00:12:46.009 LINK bdevio 00:12:46.009 LINK hello_world 00:12:46.009 LINK hello_bdev 00:12:46.009 CXX test/cpp_headers/likely.o 00:12:46.009 CC examples/bdev/bdevperf/bdevperf.o 00:12:46.267 LINK reconnect 00:12:46.267 CC examples/nvme/arbitration/arbitration.o 00:12:46.267 CC examples/nvme/nvme_manage/nvme_manage.o 00:12:46.267 CC examples/nvme/hotplug/hotplug.o 00:12:46.267 CC examples/nvme/cmb_copy/cmb_copy.o 00:12:46.267 CXX test/cpp_headers/log.o 00:12:46.267 CC examples/nvme/abort/abort.o 00:12:46.525 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:12:46.525 LINK cmb_copy 00:12:46.525 CXX test/cpp_headers/lvol.o 00:12:46.525 LINK hotplug 00:12:46.525 LINK arbitration 00:12:46.783 CXX test/cpp_headers/md5.o 00:12:46.783 LINK pmr_persistence 00:12:46.783 CXX test/cpp_headers/memory.o 00:12:46.783 LINK nvme_manage 00:12:46.783 CXX test/cpp_headers/mmio.o 00:12:46.783 CXX test/cpp_headers/nbd.o 00:12:46.783 CXX test/cpp_headers/net.o 00:12:46.783 LINK abort 00:12:46.783 CXX test/cpp_headers/notify.o 00:12:46.783 CXX test/cpp_headers/nvme.o 00:12:46.783 CXX test/cpp_headers/nvme_intel.o 00:12:47.042 CXX test/cpp_headers/nvme_ocssd.o 00:12:47.042 CXX test/cpp_headers/nvme_ocssd_spec.o 00:12:47.042 LINK cuse 00:12:47.042 CXX test/cpp_headers/nvme_spec.o 00:12:47.042 CXX test/cpp_headers/nvme_zns.o 00:12:47.042 CXX test/cpp_headers/nvmf_cmd.o 00:12:47.042 LINK bdevperf 00:12:47.042 CXX test/cpp_headers/nvmf_fc_spec.o 00:12:47.042 CXX test/cpp_headers/nvmf.o 00:12:47.042 CXX test/cpp_headers/nvmf_spec.o 00:12:47.042 CXX test/cpp_headers/nvmf_transport.o 00:12:47.300 CXX test/cpp_headers/opal.o 00:12:47.300 CXX test/cpp_headers/opal_spec.o 00:12:47.300 CXX test/cpp_headers/pci_ids.o 00:12:47.300 CXX test/cpp_headers/pipe.o 00:12:47.300 CXX test/cpp_headers/queue.o 00:12:47.300 CXX test/cpp_headers/reduce.o 00:12:47.300 CXX test/cpp_headers/rpc.o 00:12:47.300 CXX 
test/cpp_headers/scheduler.o 00:12:47.300 CXX test/cpp_headers/scsi.o 00:12:47.300 CXX test/cpp_headers/scsi_spec.o 00:12:47.300 CXX test/cpp_headers/sock.o 00:12:47.300 CXX test/cpp_headers/stdinc.o 00:12:47.300 CXX test/cpp_headers/string.o 00:12:47.558 CXX test/cpp_headers/thread.o 00:12:47.558 CC examples/nvmf/nvmf/nvmf.o 00:12:47.558 CXX test/cpp_headers/trace.o 00:12:47.558 CXX test/cpp_headers/trace_parser.o 00:12:47.558 CXX test/cpp_headers/tree.o 00:12:47.558 CXX test/cpp_headers/ublk.o 00:12:47.558 CXX test/cpp_headers/util.o 00:12:47.558 CXX test/cpp_headers/uuid.o 00:12:47.558 CXX test/cpp_headers/version.o 00:12:47.558 CXX test/cpp_headers/vfio_user_pci.o 00:12:47.558 CXX test/cpp_headers/vfio_user_spec.o 00:12:47.558 CXX test/cpp_headers/vhost.o 00:12:47.816 CXX test/cpp_headers/vmd.o 00:12:47.816 CXX test/cpp_headers/xor.o 00:12:47.816 CXX test/cpp_headers/zipf.o 00:12:47.816 LINK nvmf 00:12:51.097 LINK esnap 00:12:51.097 00:12:51.097 real 1m36.904s 00:12:51.097 user 8m44.688s 00:12:51.097 sys 1m51.326s 00:12:51.097 ************************************ 00:12:51.097 END TEST make 00:12:51.097 ************************************ 00:12:51.097 19:15:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:12:51.097 19:15:00 make -- common/autotest_common.sh@10 -- $ set +x 00:12:51.097 19:15:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:12:51.097 19:15:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:12:51.097 19:15:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:12:51.097 19:15:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:51.097 19:15:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:12:51.097 19:15:00 -- pm/common@44 -- $ pid=5289 00:12:51.097 19:15:00 -- pm/common@50 -- $ kill -TERM 5289 00:12:51.097 19:15:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:51.097 19:15:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:12:51.097 19:15:00 -- pm/common@44 -- $ pid=5291 00:12:51.097 19:15:00 -- pm/common@50 -- $ kill -TERM 5291 00:12:51.097 19:15:00 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:51.097 19:15:00 -- common/autotest_common.sh@1691 -- # lcov --version 00:12:51.097 19:15:00 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:51.355 19:15:00 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:51.355 19:15:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.355 19:15:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.355 19:15:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.355 19:15:00 -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.355 19:15:00 -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.355 19:15:00 -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.355 19:15:00 -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.355 19:15:00 -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.355 19:15:00 -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.355 19:15:00 -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.355 19:15:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.355 19:15:00 -- scripts/common.sh@344 -- # case "$op" in 00:12:51.355 19:15:00 -- scripts/common.sh@345 -- # : 1 00:12:51.355 19:15:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.355 19:15:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.355 19:15:00 -- scripts/common.sh@365 -- # decimal 1 00:12:51.355 19:15:00 -- scripts/common.sh@353 -- # local d=1 00:12:51.355 19:15:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.355 19:15:00 -- scripts/common.sh@355 -- # echo 1 00:12:51.355 19:15:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.355 19:15:00 -- scripts/common.sh@366 -- # decimal 2 00:12:51.355 19:15:00 -- scripts/common.sh@353 -- # local d=2 00:12:51.355 19:15:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.355 19:15:00 -- scripts/common.sh@355 -- # echo 2 00:12:51.355 19:15:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.355 19:15:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.355 19:15:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.355 19:15:00 -- scripts/common.sh@368 -- # return 0 00:12:51.355 19:15:00 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.355 19:15:00 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:51.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.355 --rc genhtml_branch_coverage=1 00:12:51.355 --rc genhtml_function_coverage=1 00:12:51.355 --rc genhtml_legend=1 00:12:51.355 --rc geninfo_all_blocks=1 00:12:51.355 --rc geninfo_unexecuted_blocks=1 00:12:51.355 00:12:51.355 ' 00:12:51.355 19:15:00 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:51.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.355 --rc genhtml_branch_coverage=1 00:12:51.355 --rc genhtml_function_coverage=1 00:12:51.356 --rc genhtml_legend=1 00:12:51.356 --rc geninfo_all_blocks=1 00:12:51.356 --rc geninfo_unexecuted_blocks=1 00:12:51.356 00:12:51.356 ' 00:12:51.356 19:15:00 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:51.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.356 --rc genhtml_branch_coverage=1 00:12:51.356 --rc genhtml_function_coverage=1 00:12:51.356 --rc genhtml_legend=1 00:12:51.356 --rc geninfo_all_blocks=1 00:12:51.356 --rc geninfo_unexecuted_blocks=1 00:12:51.356 00:12:51.356 ' 00:12:51.356 19:15:00 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:51.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.356 --rc genhtml_branch_coverage=1 00:12:51.356 --rc genhtml_function_coverage=1 00:12:51.356 --rc genhtml_legend=1 00:12:51.356 --rc geninfo_all_blocks=1 00:12:51.356 --rc geninfo_unexecuted_blocks=1 00:12:51.356 00:12:51.356 ' 00:12:51.356 19:15:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.356 19:15:00 -- nvmf/common.sh@7 -- # uname -s 00:12:51.356 19:15:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.356 19:15:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.356 19:15:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.356 19:15:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.356 19:15:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.356 19:15:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.356 19:15:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.356 19:15:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.356 19:15:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.356 19:15:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.356 19:15:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:12:51.356 
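The cmp_versions/decimal trace above is how autotest decides whether the installed lcov is older than 2 and therefore needs the legacy --rc option spellings. The following is a condensed, illustrative reimplementation of that dotted-version comparison, not a copy of scripts/common.sh: the helper name version_lt is ours, the real code also splits on '-' and ':', and the lcov-2 branch is deliberately left empty because this log only exercises the pre-2 path.

  #!/usr/bin/env bash
  # Return success (0) when dotted version $1 is strictly lower than $2,
  # comparing numeric fields left to right (1.15 < 2, 1.2 < 1.10, ...).
  version_lt() {
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < len; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  # Same decision as the trace: an lcov older than 2 gets the legacy flags.
  lcov_ver=$(lcov --version | awk '{print $NF}')
  if version_lt "$lcov_ver" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  else
      lcov_rc_opt=''    # lcov >= 2 renamed these options; not covered by this run
  fi
  echo "using: $lcov_rc_opt"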
19:15:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:12:51.356 19:15:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.356 19:15:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.356 19:15:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.356 19:15:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.356 19:15:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.356 19:15:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.356 19:15:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.356 19:15:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.356 19:15:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.356 19:15:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.356 19:15:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.356 19:15:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.356 19:15:00 -- paths/export.sh@5 -- # export PATH 00:12:51.356 19:15:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.356 19:15:00 -- nvmf/common.sh@51 -- # : 0 00:12:51.356 19:15:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.356 19:15:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.356 19:15:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.356 19:15:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.356 19:15:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.356 19:15:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.356 19:15:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.356 19:15:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.356 19:15:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.356 19:15:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:12:51.356 19:15:00 -- spdk/autotest.sh@32 -- # uname -s 00:12:51.356 19:15:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:12:51.356 19:15:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:12:51.356 19:15:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:51.356 19:15:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:12:51.356 19:15:00 -- 
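For context, the nvmf/common.sh variables sourced above (NVMF_PORT, NVME_HOSTNQN, NVME_HOST, NVME_SUBNQN) are consumed later by the functional tests roughly as sketched below. This is an illustration of how the pieces fit together, not a copy of any one test: the 10.0.0.1 target address is a placeholder, and a real run only issues the connect once the SPDK target is listening.

  #!/usr/bin/env bash
  # Build the host identity the same way nvmf/common.sh does.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # random per-run host NQN
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # reuse the UUID portion as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVMF_PORT=4420
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

  # Plain nvme-cli connect over TCP; the address is a placeholder for whatever
  # IP a given test configures on the target side.
  nvme connect -t tcp -a 10.0.0.1 -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"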
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:12:51.356 19:15:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:12:51.356 19:15:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:12:51.356 19:15:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:12:51.356 19:15:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54462 00:12:51.356 19:15:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:12:51.356 19:15:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:12:51.356 19:15:00 -- pm/common@17 -- # local monitor 00:12:51.356 19:15:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:51.356 19:15:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:12:51.356 19:15:00 -- pm/common@21 -- # date +%s 00:12:51.356 19:15:00 -- pm/common@25 -- # sleep 1 00:12:51.356 19:15:00 -- pm/common@21 -- # date +%s 00:12:51.356 19:15:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729192500 00:12:51.356 19:15:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729192500 00:12:51.356 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729192500_collect-cpu-load.pm.log 00:12:51.356 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729192500_collect-vmstat.pm.log 00:12:52.290 19:15:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:12:52.290 19:15:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:12:52.290 19:15:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.290 19:15:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.290 19:15:01 -- spdk/autotest.sh@59 -- # create_test_list 00:12:52.290 19:15:01 -- common/autotest_common.sh@748 -- # xtrace_disable 00:12:52.290 19:15:01 -- common/autotest_common.sh@10 -- # set +x 00:12:52.290 19:15:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:12:52.290 19:15:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:12:52.290 19:15:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:12:52.290 19:15:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:12:52.290 19:15:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:12:52.290 19:15:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:12:52.290 19:15:01 -- common/autotest_common.sh@1455 -- # uname 00:12:52.290 19:15:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:12:52.290 19:15:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:12:52.290 19:15:01 -- common/autotest_common.sh@1475 -- # uname 00:12:52.290 19:15:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:12:52.290 19:15:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:12:52.290 19:15:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:12:52.548 lcov: LCOV version 1.15 00:12:52.548 19:15:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:10.629 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:10.629 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:28.766 19:15:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:13:28.766 19:15:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.766 19:15:36 -- common/autotest_common.sh@10 -- # set +x 00:13:28.766 19:15:36 -- spdk/autotest.sh@78 -- # rm -f 00:13:28.766 19:15:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:28.766 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.766 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:13:28.766 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:28.766 19:15:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:13:28.766 19:15:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:28.766 19:15:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:28.766 19:15:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:28.766 19:15:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:28.766 19:15:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:28.766 19:15:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:28.766 19:15:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:28.766 19:15:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:13:28.766 19:15:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:13:28.766 19:15:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:28.766 19:15:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:13:28.766 19:15:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:13:28.766 19:15:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:28.766 19:15:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:28.766 19:15:37 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:28.766 19:15:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:28.766 19:15:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:28.766 19:15:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:13:28.766 19:15:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:13:28.766 19:15:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:13:28.766 19:15:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:13:28.766 19:15:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:28.766 19:15:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:28.766 No valid GPT data, bailing 
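The pre-cleanup pass above first asks whether each namespace is a zoned device (the queue/zoned sysfs attribute checked by get_zoned_devs) and then, as the "No valid GPT data, bailing" lines show, zeroes the start of every namespace that has no recognizable partition table. A minimal standalone sketch of that check-then-wipe loop follows; it is simplified to plain blkid, whereas the real helper also consults scripts/spdk-gpt.py.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the nvme*n!(*p*) pattern used by autotest.sh

  # Whole namespaces only (nvme0n1, nvme0n2, ...), never partitions (nvme0n1p1).
  for dev in /dev/nvme*n!(*p*); do
      [[ -b $dev ]] || continue
      name=${dev##*/}

      # Zoned namespaces cannot simply be overwritten, which is what the
      # get_zoned_devs check above guards against.
      zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned != none ]] && { echo "skipping zoned device $dev"; continue; }

      # No partition-table type reported -> the namespace is free for testing,
      # so its first MiB is zeroed, mirroring the dd calls in the log.
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done
  sync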
00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # pt= 00:13:28.766 19:15:37 -- scripts/common.sh@395 -- # return 1 00:13:28.766 19:15:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:28.766 1+0 records in 00:13:28.766 1+0 records out 00:13:28.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563521 s, 186 MB/s 00:13:28.766 19:15:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:13:28.766 19:15:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:13:28.766 19:15:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:13:28.766 19:15:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:13:28.766 19:15:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:13:28.766 No valid GPT data, bailing 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # pt= 00:13:28.766 19:15:37 -- scripts/common.sh@395 -- # return 1 00:13:28.766 19:15:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:13:28.766 1+0 records in 00:13:28.766 1+0 records out 00:13:28.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478199 s, 219 MB/s 00:13:28.766 19:15:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:13:28.766 19:15:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:13:28.766 19:15:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:13:28.766 19:15:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:13:28.766 19:15:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:13:28.766 No valid GPT data, bailing 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # pt= 00:13:28.766 19:15:37 -- scripts/common.sh@395 -- # return 1 00:13:28.766 19:15:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:13:28.766 1+0 records in 00:13:28.766 1+0 records out 00:13:28.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392556 s, 267 MB/s 00:13:28.766 19:15:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:13:28.766 19:15:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:13:28.766 19:15:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:13:28.766 19:15:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:13:28.766 19:15:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:13:28.766 No valid GPT data, bailing 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:28.766 19:15:37 -- scripts/common.sh@394 -- # pt= 00:13:28.766 19:15:37 -- scripts/common.sh@395 -- # return 1 00:13:28.766 19:15:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:13:28.766 1+0 records in 00:13:28.766 1+0 records out 00:13:28.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505111 s, 208 MB/s 00:13:28.766 19:15:37 -- spdk/autotest.sh@105 -- # sync 00:13:28.766 19:15:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:28.766 19:15:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:28.766 19:15:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:13:30.667 19:15:39 -- spdk/autotest.sh@111 -- # uname -s 00:13:30.667 19:15:39 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:13:30.667 19:15:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:13:30.667 19:15:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:13:31.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:31.599 Hugepages 00:13:31.599 node hugesize free / total 00:13:31.599 node0 1048576kB 0 / 0 00:13:31.599 node0 2048kB 0 / 0 00:13:31.599 00:13:31.599 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:31.599 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:13:31.599 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:13:31.599 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:13:31.599 19:15:40 -- spdk/autotest.sh@117 -- # uname -s 00:13:31.599 19:15:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:13:31.599 19:15:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:13:31.599 19:15:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:32.585 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.585 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.585 19:15:41 -- common/autotest_common.sh@1515 -- # sleep 1 00:13:33.519 19:15:42 -- common/autotest_common.sh@1516 -- # bdfs=() 00:13:33.519 19:15:42 -- common/autotest_common.sh@1516 -- # local bdfs 00:13:33.519 19:15:42 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:13:33.519 19:15:42 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:13:33.519 19:15:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:33.519 19:15:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:33.519 19:15:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:33.519 19:15:42 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:33.519 19:15:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:33.519 19:15:42 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:13:33.519 19:15:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:33.519 19:15:42 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:34.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.085 Waiting for block devices as requested 00:13:34.085 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.085 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.344 19:15:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:13:34.344 19:15:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:13:34.344 19:15:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:13:34.344 19:15:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:13:34.344 19:15:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1541 -- # continue 00:13:34.344 19:15:43 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:13:34.344 19:15:43 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:13:34.344 19:15:43 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # grep oacs 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:13:34.344 19:15:43 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:13:34.344 19:15:43 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:13:34.344 19:15:43 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:13:34.344 19:15:43 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:13:34.344 19:15:43 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:13:34.344 19:15:43 -- common/autotest_common.sh@1541 -- # continue 00:13:34.344 19:15:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:13:34.344 19:15:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.344 19:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.344 19:15:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:13:34.344 19:15:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.344 19:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:34.344 19:15:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.910 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:35.168 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.168 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.168 19:15:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:13:35.168 19:15:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.168 19:15:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.168 19:15:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:13:35.168 19:15:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:13:35.168 19:15:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:13:35.168 19:15:44 -- common/autotest_common.sh@1561 -- # bdfs=() 00:13:35.168 19:15:44 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:13:35.168 19:15:44 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:13:35.168 19:15:44 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:13:35.168 19:15:44 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:13:35.168 19:15:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:35.168 19:15:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:35.168 19:15:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:35.168 19:15:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:35.168 19:15:44 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:35.168 19:15:44 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:13:35.168 19:15:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:35.168 19:15:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:13:35.168 19:15:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:13:35.426 19:15:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:13:35.426 19:15:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:35.426 19:15:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:13:35.426 19:15:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:13:35.426 19:15:44 -- common/autotest_common.sh@1564 -- # device=0x0010 00:13:35.426 19:15:44 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:35.426 19:15:44 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:13:35.426 19:15:44 -- common/autotest_common.sh@1570 -- # return 0 00:13:35.426 19:15:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:13:35.426 19:15:44 -- common/autotest_common.sh@1578 -- # return 0 00:13:35.426 19:15:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:13:35.426 19:15:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:13:35.426 19:15:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:13:35.426 19:15:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:13:35.426 19:15:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:13:35.426 19:15:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.426 19:15:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.426 19:15:44 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:13:35.426 19:15:44 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:13:35.426 19:15:44 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:13:35.426 19:15:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:35.426 19:15:44 -- 
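The block above strings together three lookups: gen_nvme.sh piped through jq to enumerate controller PCI addresses, a sysfs read of each device's PCI device ID (the opal cleanup only acts on 0x0a54 controllers, which these QEMU 0x0010 devices are not), and nvme id-ctrl parsed with grep/cut to read the oacs and unvmcap fields. A compact sketch of the same lookups; the loop structure and variable names are illustrative, while the jq path and the grep/cut parsing are the ones actually traced.

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk

  # Controller discovery as in get_nvme_bdfs: gen_nvme.sh emits an attach
  # config, jq pulls out the PCI addresses (BDFs).
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

  for bdf in "${bdfs[@]}"; do
      # PCI device ID straight from sysfs; 0x0a54 is the ID the opal cleanup filters on.
      devid=$(cat "/sys/bus/pci/devices/$bdf/device")

      # Map the BDF back to its kernel controller node (nvme0, nvme1, ...).
      ctrl=""
      for link in /sys/class/nvme/nvme*; do
          [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]] && { ctrl=${link##*/}; break; }
      done
      [[ -n $ctrl ]] || continue

      # Same parsing as the trace: OACS bit 3 (0x8) means namespace management
      # is supported; unvmcap is the unallocated NVM capacity.
      oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
      unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)
      echo "$bdf ($ctrl): device=$devid ns_mgmt=$(( oacs & 0x8 )) unvmcap=${unvmcap// /}"
  done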
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:35.426 19:15:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.426 19:15:44 -- common/autotest_common.sh@10 -- # set +x 00:13:35.426 ************************************ 00:13:35.426 START TEST env 00:13:35.426 ************************************ 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:35.426 * Looking for test storage... 00:13:35.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.426 19:15:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.426 19:15:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.426 19:15:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.426 19:15:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.426 19:15:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.426 19:15:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.426 19:15:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.426 19:15:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.426 19:15:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.426 19:15:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.426 19:15:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.426 19:15:44 env -- scripts/common.sh@344 -- # case "$op" in 00:13:35.426 19:15:44 env -- scripts/common.sh@345 -- # : 1 00:13:35.426 19:15:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.426 19:15:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.426 19:15:44 env -- scripts/common.sh@365 -- # decimal 1 00:13:35.426 19:15:44 env -- scripts/common.sh@353 -- # local d=1 00:13:35.426 19:15:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.426 19:15:44 env -- scripts/common.sh@355 -- # echo 1 00:13:35.426 19:15:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.426 19:15:44 env -- scripts/common.sh@366 -- # decimal 2 00:13:35.426 19:15:44 env -- scripts/common.sh@353 -- # local d=2 00:13:35.426 19:15:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.426 19:15:44 env -- scripts/common.sh@355 -- # echo 2 00:13:35.426 19:15:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.426 19:15:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.426 19:15:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.426 19:15:44 env -- scripts/common.sh@368 -- # return 0 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.426 --rc genhtml_branch_coverage=1 00:13:35.426 --rc genhtml_function_coverage=1 00:13:35.426 --rc genhtml_legend=1 00:13:35.426 --rc geninfo_all_blocks=1 00:13:35.426 --rc geninfo_unexecuted_blocks=1 00:13:35.426 00:13:35.426 ' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.426 --rc genhtml_branch_coverage=1 00:13:35.426 --rc genhtml_function_coverage=1 00:13:35.426 --rc genhtml_legend=1 00:13:35.426 --rc geninfo_all_blocks=1 00:13:35.426 --rc geninfo_unexecuted_blocks=1 00:13:35.426 00:13:35.426 ' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.426 --rc genhtml_branch_coverage=1 00:13:35.426 --rc genhtml_function_coverage=1 00:13:35.426 --rc genhtml_legend=1 00:13:35.426 --rc geninfo_all_blocks=1 00:13:35.426 --rc geninfo_unexecuted_blocks=1 00:13:35.426 00:13:35.426 ' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:35.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.426 --rc genhtml_branch_coverage=1 00:13:35.426 --rc genhtml_function_coverage=1 00:13:35.426 --rc genhtml_legend=1 00:13:35.426 --rc geninfo_all_blocks=1 00:13:35.426 --rc geninfo_unexecuted_blocks=1 00:13:35.426 00:13:35.426 ' 00:13:35.426 19:15:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:35.426 19:15:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.426 19:15:44 env -- common/autotest_common.sh@10 -- # set +x 00:13:35.426 ************************************ 00:13:35.426 START TEST env_memory 00:13:35.426 ************************************ 00:13:35.426 19:15:44 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:35.426 00:13:35.426 00:13:35.426 CUnit - A unit testing framework for C - Version 2.1-3 00:13:35.426 http://cunit.sourceforge.net/ 00:13:35.426 00:13:35.426 00:13:35.426 Suite: memory 00:13:35.684 Test: alloc and free memory map ...[2024-10-17 19:15:44.706547] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:35.684 passed 00:13:35.684 Test: mem map translation ...[2024-10-17 19:15:44.737974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:35.684 [2024-10-17 19:15:44.738093] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:35.684 [2024-10-17 19:15:44.738169] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:35.684 [2024-10-17 19:15:44.738183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:35.684 passed 00:13:35.684 Test: mem map registration ...[2024-10-17 19:15:44.802170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:13:35.684 [2024-10-17 19:15:44.802225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:13:35.684 passed 00:13:35.684 Test: mem map adjacent registrations ...passed 00:13:35.685 00:13:35.685 Run Summary: Type Total Ran Passed Failed Inactive 00:13:35.685 suites 1 1 n/a 0 0 00:13:35.685 tests 4 4 4 0 0 00:13:35.685 asserts 152 152 152 0 n/a 00:13:35.685 00:13:35.685 Elapsed time = 0.215 seconds 00:13:35.685 00:13:35.685 real 0m0.235s 00:13:35.685 user 0m0.215s 00:13:35.685 sys 0m0.015s 00:13:35.685 19:15:44 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.685 ************************************ 00:13:35.685 END TEST env_memory 00:13:35.685 19:15:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:13:35.685 ************************************ 00:13:35.685 19:15:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:35.685 19:15:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:35.685 19:15:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.685 19:15:44 env -- common/autotest_common.sh@10 -- # set +x 00:13:35.685 ************************************ 00:13:35.685 START TEST env_vtophys 00:13:35.685 ************************************ 00:13:35.685 19:15:44 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:35.943 EAL: lib.eal log level changed from notice to debug 00:13:35.943 EAL: Detected lcore 0 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 1 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 2 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 3 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 4 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 5 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 6 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 7 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 8 as core 0 on socket 0 00:13:35.943 EAL: Detected lcore 9 as core 0 on socket 0 00:13:35.943 EAL: Maximum logical cores by configuration: 128 00:13:35.943 EAL: Detected CPU lcores: 10 00:13:35.943 EAL: Detected NUMA nodes: 1 00:13:35.943 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:13:35.943 EAL: Detected shared linkage of DPDK 00:13:35.943 EAL: No 
shared files mode enabled, IPC will be disabled 00:13:35.943 EAL: Selected IOVA mode 'PA' 00:13:35.944 EAL: Probing VFIO support... 00:13:35.944 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:35.944 EAL: VFIO modules not loaded, skipping VFIO support... 00:13:35.944 EAL: Ask a virtual area of 0x2e000 bytes 00:13:35.944 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:35.944 EAL: Setting up physically contiguous memory... 00:13:35.944 EAL: Setting maximum number of open files to 524288 00:13:35.944 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:35.944 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:35.944 EAL: Ask a virtual area of 0x61000 bytes 00:13:35.944 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:35.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:35.944 EAL: Ask a virtual area of 0x400000000 bytes 00:13:35.944 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:35.944 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:35.944 EAL: Ask a virtual area of 0x61000 bytes 00:13:35.944 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:35.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:35.944 EAL: Ask a virtual area of 0x400000000 bytes 00:13:35.944 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:35.944 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:35.944 EAL: Ask a virtual area of 0x61000 bytes 00:13:35.944 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:35.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:35.944 EAL: Ask a virtual area of 0x400000000 bytes 00:13:35.944 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:35.944 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:35.944 EAL: Ask a virtual area of 0x61000 bytes 00:13:35.944 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:35.944 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:35.944 EAL: Ask a virtual area of 0x400000000 bytes 00:13:35.944 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:35.944 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:35.944 EAL: Hugepages will be freed exactly as allocated. 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: TSC frequency is ~2200000 KHz 00:13:35.944 EAL: Main lcore 0 is ready (tid=7ff4b5820a00;cpuset=[0]) 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 0 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 2MB 00:13:35.944 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:35.944 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:35.944 EAL: Mem event callback 'spdk:(nil)' registered 00:13:35.944 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:13:35.944 00:13:35.944 00:13:35.944 CUnit - A unit testing framework for C - Version 2.1-3 00:13:35.944 http://cunit.sourceforge.net/ 00:13:35.944 00:13:35.944 00:13:35.944 Suite: components_suite 00:13:35.944 Test: vtophys_malloc_test ...passed 00:13:35.944 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 4MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 4MB 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 6MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 6MB 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 10MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 10MB 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 18MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 18MB 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 34MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 34MB 00:13:35.944 EAL: Trying to obtain current memory policy. 
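The vtophys run above grows and shrinks the EAL heap through the registered 'spdk:' mem event callback, and all of that memory comes out of the 2 MB hugepages detected earlier. If a run stalls at a "Heap on socket 0 was expanded by ..." step, the usual first check is hugepage availability; the quick illustration below uses standard Linux sysfs paths and an arbitrary 1024-page figure, and the HUGEMEM invocation is the same setup.sh already used throughout this log.

  #!/usr/bin/env bash
  # Show what the kernel currently has reserved.
  grep -i huge /proc/meminfo

  # Reserve 1024 x 2 MB hugepages on node 0 (value is arbitrary; needs root).
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  # The SPDK helper does the same with additional checks; HUGEMEM is in MB.
  HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh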
00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 66MB 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was shrunk by 66MB 00:13:35.944 EAL: Trying to obtain current memory policy. 00:13:35.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:35.944 EAL: Restoring previous memory policy: 4 00:13:35.944 EAL: Calling mem event callback 'spdk:(nil)' 00:13:35.944 EAL: request: mp_malloc_sync 00:13:35.944 EAL: No shared files mode enabled, IPC is disabled 00:13:35.944 EAL: Heap on socket 0 was expanded by 130MB 00:13:36.202 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.202 EAL: request: mp_malloc_sync 00:13:36.202 EAL: No shared files mode enabled, IPC is disabled 00:13:36.202 EAL: Heap on socket 0 was shrunk by 130MB 00:13:36.202 EAL: Trying to obtain current memory policy. 00:13:36.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:36.202 EAL: Restoring previous memory policy: 4 00:13:36.202 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.202 EAL: request: mp_malloc_sync 00:13:36.202 EAL: No shared files mode enabled, IPC is disabled 00:13:36.202 EAL: Heap on socket 0 was expanded by 258MB 00:13:36.202 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.202 EAL: request: mp_malloc_sync 00:13:36.202 EAL: No shared files mode enabled, IPC is disabled 00:13:36.202 EAL: Heap on socket 0 was shrunk by 258MB 00:13:36.202 EAL: Trying to obtain current memory policy. 00:13:36.202 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:36.461 EAL: Restoring previous memory policy: 4 00:13:36.461 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.461 EAL: request: mp_malloc_sync 00:13:36.461 EAL: No shared files mode enabled, IPC is disabled 00:13:36.461 EAL: Heap on socket 0 was expanded by 514MB 00:13:36.461 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.719 EAL: request: mp_malloc_sync 00:13:36.719 EAL: No shared files mode enabled, IPC is disabled 00:13:36.719 EAL: Heap on socket 0 was shrunk by 514MB 00:13:36.719 EAL: Trying to obtain current memory policy. 
00:13:36.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:36.978 EAL: Restoring previous memory policy: 4 00:13:36.978 EAL: Calling mem event callback 'spdk:(nil)' 00:13:36.978 EAL: request: mp_malloc_sync 00:13:36.978 EAL: No shared files mode enabled, IPC is disabled 00:13:36.978 EAL: Heap on socket 0 was expanded by 1026MB 00:13:36.978 EAL: Calling mem event callback 'spdk:(nil)' 00:13:37.297 passed 00:13:37.297 00:13:37.297 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.297 suites 1 1 n/a 0 0 00:13:37.297 tests 2 2 2 0 0 00:13:37.297 asserts 5484 5484 5484 0 n/a 00:13:37.297 00:13:37.297 Elapsed time = 1.291 seconds 00:13:37.297 EAL: request: mp_malloc_sync 00:13:37.297 EAL: No shared files mode enabled, IPC is disabled 00:13:37.297 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:37.297 EAL: Calling mem event callback 'spdk:(nil)' 00:13:37.297 EAL: request: mp_malloc_sync 00:13:37.297 EAL: No shared files mode enabled, IPC is disabled 00:13:37.297 EAL: Heap on socket 0 was shrunk by 2MB 00:13:37.297 EAL: No shared files mode enabled, IPC is disabled 00:13:37.297 EAL: No shared files mode enabled, IPC is disabled 00:13:37.297 EAL: No shared files mode enabled, IPC is disabled 00:13:37.297 ************************************ 00:13:37.297 END TEST env_vtophys 00:13:37.297 ************************************ 00:13:37.297 00:13:37.297 real 0m1.497s 00:13:37.297 user 0m0.823s 00:13:37.297 sys 0m0.529s 00:13:37.297 19:15:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.297 19:15:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:13:37.297 19:15:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:37.297 19:15:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:37.298 19:15:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.298 19:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:13:37.298 ************************************ 00:13:37.298 START TEST env_pci 00:13:37.298 ************************************ 00:13:37.298 19:15:46 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:37.298 00:13:37.298 00:13:37.298 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.298 http://cunit.sourceforge.net/ 00:13:37.298 00:13:37.298 00:13:37.298 Suite: pci 00:13:37.298 Test: pci_hook ...[2024-10-17 19:15:46.498415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56734 has claimed it 00:13:37.298 passed 00:13:37.298 00:13:37.298 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.298 suites 1 1 n/a 0 0 00:13:37.298 tests 1 1 1 0 0 00:13:37.298 asserts 25 25 25 0 n/a 00:13:37.298 00:13:37.298 Elapsed time = 0.002 seconds 00:13:37.298 EAL: Cannot find device (10000:00:01.0) 00:13:37.298 EAL: Failed to attach device on primary process 00:13:37.298 00:13:37.298 real 0m0.022s 00:13:37.298 user 0m0.008s 00:13:37.298 sys 0m0.014s 00:13:37.298 19:15:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.298 19:15:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:13:37.298 ************************************ 00:13:37.298 END TEST env_pci 00:13:37.298 ************************************ 00:13:37.298 19:15:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:37.298 19:15:46 env -- env/env.sh@15 -- # uname 00:13:37.556 19:15:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:37.556 19:15:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:13:37.557 19:15:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:37.557 19:15:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:37.557 19:15:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.557 19:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:13:37.557 ************************************ 00:13:37.557 START TEST env_dpdk_post_init 00:13:37.557 ************************************ 00:13:37.557 19:15:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:37.557 EAL: Detected CPU lcores: 10 00:13:37.557 EAL: Detected NUMA nodes: 1 00:13:37.557 EAL: Detected shared linkage of DPDK 00:13:37.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:37.557 EAL: Selected IOVA mode 'PA' 00:13:37.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:37.557 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:13:37.557 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:13:37.557 Starting DPDK initialization... 00:13:37.557 Starting SPDK post initialization... 00:13:37.557 SPDK NVMe probe 00:13:37.557 Attaching to 0000:00:10.0 00:13:37.557 Attaching to 0000:00:11.0 00:13:37.557 Attached to 0000:00:10.0 00:13:37.557 Attached to 0000:00:11.0 00:13:37.557 Cleaning up... 00:13:37.557 00:13:37.557 real 0m0.221s 00:13:37.557 user 0m0.085s 00:13:37.557 sys 0m0.036s 00:13:37.557 19:15:46 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.557 ************************************ 00:13:37.557 END TEST env_dpdk_post_init 00:13:37.557 ************************************ 00:13:37.557 19:15:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:13:37.815 19:15:46 env -- env/env.sh@26 -- # uname 00:13:37.815 19:15:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:37.815 19:15:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:37.815 19:15:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:37.815 19:15:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.815 19:15:46 env -- common/autotest_common.sh@10 -- # set +x 00:13:37.815 ************************************ 00:13:37.815 START TEST env_mem_callbacks 00:13:37.815 ************************************ 00:13:37.815 19:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:37.815 EAL: Detected CPU lcores: 10 00:13:37.815 EAL: Detected NUMA nodes: 1 00:13:37.815 EAL: Detected shared linkage of DPDK 00:13:37.815 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:37.815 EAL: Selected IOVA mode 'PA' 00:13:37.815 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:37.815 00:13:37.815 00:13:37.815 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.815 http://cunit.sourceforge.net/ 00:13:37.815 00:13:37.815 00:13:37.815 Suite: memory 00:13:37.815 Test: test ... 
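The env_dpdk_post_init invocation traced above is assembled in two steps by env.sh: a fixed '-c 0x1' core mask plus, on Linux, a pinned --base-virtaddr so every process maps the shared regions at the same address. A minimal sketch of that composition, using the paths of this workspace and assuming the test binary has already been built by make:

  #!/usr/bin/env bash
  testdir=/home/vagrant/spdk_repo/spdk/test/env

  argv='-c 0x1 '                                   # one core is enough for this test
  if [[ $(uname) == Linux ]]; then
      # Pin the EAL virtual address base so multi-process setups map identically.
      argv+='--base-virtaddr=0x200000000000'
  fi

  # argv is intentionally left unquoted so it word-splits into separate options.
  "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv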
00:13:37.815 register 0x200000200000 2097152 00:13:37.815 malloc 3145728 00:13:37.815 register 0x200000400000 4194304 00:13:37.815 buf 0x200000500000 len 3145728 PASSED 00:13:37.815 malloc 64 00:13:37.815 buf 0x2000004fff40 len 64 PASSED 00:13:37.815 malloc 4194304 00:13:37.815 register 0x200000800000 6291456 00:13:37.815 buf 0x200000a00000 len 4194304 PASSED 00:13:37.815 free 0x200000500000 3145728 00:13:37.815 free 0x2000004fff40 64 00:13:37.815 unregister 0x200000400000 4194304 PASSED 00:13:37.815 free 0x200000a00000 4194304 00:13:37.815 unregister 0x200000800000 6291456 PASSED 00:13:37.815 malloc 8388608 00:13:37.815 register 0x200000400000 10485760 00:13:37.815 buf 0x200000600000 len 8388608 PASSED 00:13:37.815 free 0x200000600000 8388608 00:13:37.815 unregister 0x200000400000 10485760 PASSED 00:13:37.815 passed 00:13:37.815 00:13:37.815 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.815 suites 1 1 n/a 0 0 00:13:37.815 tests 1 1 1 0 0 00:13:37.815 asserts 15 15 15 0 n/a 00:13:37.815 00:13:37.815 Elapsed time = 0.006 seconds 00:13:37.815 00:13:37.815 real 0m0.144s 00:13:37.815 user 0m0.016s 00:13:37.815 sys 0m0.026s 00:13:37.815 19:15:46 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.815 ************************************ 00:13:37.815 END TEST env_mem_callbacks 00:13:37.815 ************************************ 00:13:37.815 19:15:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:13:37.815 ************************************ 00:13:37.815 END TEST env 00:13:37.815 ************************************ 00:13:37.815 00:13:37.815 real 0m2.569s 00:13:37.815 user 0m1.353s 00:13:37.815 sys 0m0.862s 00:13:37.815 19:15:47 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.815 19:15:47 env -- common/autotest_common.sh@10 -- # set +x 00:13:37.815 19:15:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:37.815 19:15:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:37.815 19:15:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.815 19:15:47 -- common/autotest_common.sh@10 -- # set +x 00:13:37.815 ************************************ 00:13:37.815 START TEST rpc 00:13:37.815 ************************************ 00:13:37.815 19:15:47 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:38.073 * Looking for test storage... 
00:13:38.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:38.073 19:15:47 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:38.073 19:15:47 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:38.073 19:15:47 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:38.073 19:15:47 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:38.073 19:15:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.073 19:15:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.073 19:15:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.073 19:15:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.073 19:15:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.073 19:15:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.073 19:15:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.073 19:15:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.073 19:15:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.073 19:15:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:38.073 19:15:47 rpc -- scripts/common.sh@345 -- # : 1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.073 19:15:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.073 19:15:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@353 -- # local d=1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.073 19:15:47 rpc -- scripts/common.sh@355 -- # echo 1 00:13:38.073 19:15:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.074 19:15:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:13:38.074 19:15:47 rpc -- scripts/common.sh@353 -- # local d=2 00:13:38.074 19:15:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.074 19:15:47 rpc -- scripts/common.sh@355 -- # echo 2 00:13:38.074 19:15:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.074 19:15:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.074 19:15:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.074 19:15:47 rpc -- scripts/common.sh@368 -- # return 0 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.074 --rc genhtml_branch_coverage=1 00:13:38.074 --rc genhtml_function_coverage=1 00:13:38.074 --rc genhtml_legend=1 00:13:38.074 --rc geninfo_all_blocks=1 00:13:38.074 --rc geninfo_unexecuted_blocks=1 00:13:38.074 00:13:38.074 ' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.074 --rc genhtml_branch_coverage=1 00:13:38.074 --rc genhtml_function_coverage=1 00:13:38.074 --rc genhtml_legend=1 00:13:38.074 --rc geninfo_all_blocks=1 00:13:38.074 --rc geninfo_unexecuted_blocks=1 00:13:38.074 00:13:38.074 ' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.074 --rc genhtml_branch_coverage=1 00:13:38.074 --rc genhtml_function_coverage=1 00:13:38.074 --rc 
genhtml_legend=1 00:13:38.074 --rc geninfo_all_blocks=1 00:13:38.074 --rc geninfo_unexecuted_blocks=1 00:13:38.074 00:13:38.074 ' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.074 --rc genhtml_branch_coverage=1 00:13:38.074 --rc genhtml_function_coverage=1 00:13:38.074 --rc genhtml_legend=1 00:13:38.074 --rc geninfo_all_blocks=1 00:13:38.074 --rc geninfo_unexecuted_blocks=1 00:13:38.074 00:13:38.074 ' 00:13:38.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.074 19:15:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56857 00:13:38.074 19:15:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:13:38.074 19:15:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:38.074 19:15:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56857 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@831 -- # '[' -z 56857 ']' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.074 19:15:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.332 [2024-10-17 19:15:47.333532] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:13:38.332 [2024-10-17 19:15:47.333656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56857 ] 00:13:38.332 [2024-10-17 19:15:47.490625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.332 [2024-10-17 19:15:47.556055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:38.332 [2024-10-17 19:15:47.556144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56857' to capture a snapshot of events at runtime. 00:13:38.332 [2024-10-17 19:15:47.556159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.332 [2024-10-17 19:15:47.556168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.332 [2024-10-17 19:15:47.556176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56857 for offline analysis/debug. 
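At this point spdk_tgt has been launched with the bdev tracepoint group enabled and the harness is waiting for it to listen on /var/tmp/spdk.sock. A hedged sketch of the same flow driven by hand with scripts/rpc.py follows; the RPC names and arguments match the rpc_cmd calls in the rpc_integrity output below, while the polling loop is only an illustrative stand-in for the waitforlisten helper, not its real code.

# Sketch, assuming an SPDK build tree and the default /var/tmp/spdk.sock socket.
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!
# Poll until the RPC server answers (spdk_get_version is the same RPC the skip_rpc
# test issues later in this log).
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

./scripts/rpc.py bdev_malloc_create 8 512                      # -> Malloc0: 16384 x 512-byte blocks
./scripts/rpc.py bdev_get_bdevs                                # JSON dump like the one below
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0

kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null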
00:13:38.332 [2024-10-17 19:15:47.556669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.590 [2024-10-17 19:15:47.629027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.590 19:15:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.590 19:15:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:13:38.590 19:15:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:38.590 19:15:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:38.590 19:15:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:38.590 19:15:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:38.590 19:15:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:38.590 19:15:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.590 19:15:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.590 ************************************ 00:13:38.590 START TEST rpc_integrity 00:13:38.590 ************************************ 00:13:38.590 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:13:38.590 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.590 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.590 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.849 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.849 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:38.849 { 00:13:38.849 "name": "Malloc0", 00:13:38.849 "aliases": [ 00:13:38.849 "b1baa103-120e-4aea-a9bb-c15afef8d3be" 00:13:38.849 ], 00:13:38.849 "product_name": "Malloc disk", 00:13:38.849 "block_size": 512, 00:13:38.849 "num_blocks": 16384, 00:13:38.849 "uuid": "b1baa103-120e-4aea-a9bb-c15afef8d3be", 00:13:38.849 "assigned_rate_limits": { 00:13:38.849 "rw_ios_per_sec": 0, 00:13:38.849 "rw_mbytes_per_sec": 0, 00:13:38.849 "r_mbytes_per_sec": 0, 00:13:38.849 "w_mbytes_per_sec": 0 00:13:38.849 }, 00:13:38.849 "claimed": false, 00:13:38.849 "zoned": false, 00:13:38.849 
"supported_io_types": { 00:13:38.849 "read": true, 00:13:38.849 "write": true, 00:13:38.849 "unmap": true, 00:13:38.849 "flush": true, 00:13:38.849 "reset": true, 00:13:38.849 "nvme_admin": false, 00:13:38.849 "nvme_io": false, 00:13:38.849 "nvme_io_md": false, 00:13:38.849 "write_zeroes": true, 00:13:38.849 "zcopy": true, 00:13:38.849 "get_zone_info": false, 00:13:38.849 "zone_management": false, 00:13:38.849 "zone_append": false, 00:13:38.849 "compare": false, 00:13:38.849 "compare_and_write": false, 00:13:38.849 "abort": true, 00:13:38.849 "seek_hole": false, 00:13:38.849 "seek_data": false, 00:13:38.849 "copy": true, 00:13:38.849 "nvme_iov_md": false 00:13:38.849 }, 00:13:38.849 "memory_domains": [ 00:13:38.849 { 00:13:38.849 "dma_device_id": "system", 00:13:38.849 "dma_device_type": 1 00:13:38.850 }, 00:13:38.850 { 00:13:38.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.850 "dma_device_type": 2 00:13:38.850 } 00:13:38.850 ], 00:13:38.850 "driver_specific": {} 00:13:38.850 } 00:13:38.850 ]' 00:13:38.850 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:38.850 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:38.850 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:38.850 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.850 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.850 [2024-10-17 19:15:47.984153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:38.850 [2024-10-17 19:15:47.984221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.850 [2024-10-17 19:15:47.984246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b80120 00:13:38.850 [2024-10-17 19:15:47.984274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.850 [2024-10-17 19:15:47.986090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.850 [2024-10-17 19:15:47.986141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:38.850 Passthru0 00:13:38.850 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.850 19:15:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:38.850 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.850 19:15:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:38.850 { 00:13:38.850 "name": "Malloc0", 00:13:38.850 "aliases": [ 00:13:38.850 "b1baa103-120e-4aea-a9bb-c15afef8d3be" 00:13:38.850 ], 00:13:38.850 "product_name": "Malloc disk", 00:13:38.850 "block_size": 512, 00:13:38.850 "num_blocks": 16384, 00:13:38.850 "uuid": "b1baa103-120e-4aea-a9bb-c15afef8d3be", 00:13:38.850 "assigned_rate_limits": { 00:13:38.850 "rw_ios_per_sec": 0, 00:13:38.850 "rw_mbytes_per_sec": 0, 00:13:38.850 "r_mbytes_per_sec": 0, 00:13:38.850 "w_mbytes_per_sec": 0 00:13:38.850 }, 00:13:38.850 "claimed": true, 00:13:38.850 "claim_type": "exclusive_write", 00:13:38.850 "zoned": false, 00:13:38.850 "supported_io_types": { 00:13:38.850 "read": true, 00:13:38.850 "write": true, 00:13:38.850 "unmap": true, 00:13:38.850 "flush": true, 00:13:38.850 "reset": true, 00:13:38.850 "nvme_admin": false, 
00:13:38.850 "nvme_io": false, 00:13:38.850 "nvme_io_md": false, 00:13:38.850 "write_zeroes": true, 00:13:38.850 "zcopy": true, 00:13:38.850 "get_zone_info": false, 00:13:38.850 "zone_management": false, 00:13:38.850 "zone_append": false, 00:13:38.850 "compare": false, 00:13:38.850 "compare_and_write": false, 00:13:38.850 "abort": true, 00:13:38.850 "seek_hole": false, 00:13:38.850 "seek_data": false, 00:13:38.850 "copy": true, 00:13:38.850 "nvme_iov_md": false 00:13:38.850 }, 00:13:38.850 "memory_domains": [ 00:13:38.850 { 00:13:38.850 "dma_device_id": "system", 00:13:38.850 "dma_device_type": 1 00:13:38.850 }, 00:13:38.850 { 00:13:38.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.850 "dma_device_type": 2 00:13:38.850 } 00:13:38.850 ], 00:13:38.850 "driver_specific": {} 00:13:38.850 }, 00:13:38.850 { 00:13:38.850 "name": "Passthru0", 00:13:38.850 "aliases": [ 00:13:38.850 "1bb2cce2-334d-5fef-8cae-cc0e3363236b" 00:13:38.850 ], 00:13:38.850 "product_name": "passthru", 00:13:38.850 "block_size": 512, 00:13:38.850 "num_blocks": 16384, 00:13:38.850 "uuid": "1bb2cce2-334d-5fef-8cae-cc0e3363236b", 00:13:38.850 "assigned_rate_limits": { 00:13:38.850 "rw_ios_per_sec": 0, 00:13:38.850 "rw_mbytes_per_sec": 0, 00:13:38.850 "r_mbytes_per_sec": 0, 00:13:38.850 "w_mbytes_per_sec": 0 00:13:38.850 }, 00:13:38.850 "claimed": false, 00:13:38.850 "zoned": false, 00:13:38.850 "supported_io_types": { 00:13:38.850 "read": true, 00:13:38.850 "write": true, 00:13:38.850 "unmap": true, 00:13:38.850 "flush": true, 00:13:38.850 "reset": true, 00:13:38.850 "nvme_admin": false, 00:13:38.850 "nvme_io": false, 00:13:38.850 "nvme_io_md": false, 00:13:38.850 "write_zeroes": true, 00:13:38.850 "zcopy": true, 00:13:38.850 "get_zone_info": false, 00:13:38.850 "zone_management": false, 00:13:38.850 "zone_append": false, 00:13:38.850 "compare": false, 00:13:38.850 "compare_and_write": false, 00:13:38.850 "abort": true, 00:13:38.850 "seek_hole": false, 00:13:38.850 "seek_data": false, 00:13:38.850 "copy": true, 00:13:38.850 "nvme_iov_md": false 00:13:38.850 }, 00:13:38.850 "memory_domains": [ 00:13:38.850 { 00:13:38.850 "dma_device_id": "system", 00:13:38.850 "dma_device_type": 1 00:13:38.850 }, 00:13:38.850 { 00:13:38.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.850 "dma_device_type": 2 00:13:38.850 } 00:13:38.850 ], 00:13:38.850 "driver_specific": { 00:13:38.850 "passthru": { 00:13:38.850 "name": "Passthru0", 00:13:38.850 "base_bdev_name": "Malloc0" 00:13:38.850 } 00:13:38.850 } 00:13:38.850 } 00:13:38.850 ]' 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:38.850 19:15:48 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:38.850 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:38.850 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:39.108 ************************************ 00:13:39.108 END TEST rpc_integrity 00:13:39.108 ************************************ 00:13:39.108 19:15:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:39.108 00:13:39.109 real 0m0.305s 00:13:39.109 user 0m0.188s 00:13:39.109 sys 0m0.045s 00:13:39.109 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 19:15:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:39.109 19:15:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.109 19:15:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.109 19:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 ************************************ 00:13:39.109 START TEST rpc_plugins 00:13:39.109 ************************************ 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:39.109 { 00:13:39.109 "name": "Malloc1", 00:13:39.109 "aliases": [ 00:13:39.109 "5514eba7-fc87-4032-a12f-26efc69d4399" 00:13:39.109 ], 00:13:39.109 "product_name": "Malloc disk", 00:13:39.109 "block_size": 4096, 00:13:39.109 "num_blocks": 256, 00:13:39.109 "uuid": "5514eba7-fc87-4032-a12f-26efc69d4399", 00:13:39.109 "assigned_rate_limits": { 00:13:39.109 "rw_ios_per_sec": 0, 00:13:39.109 "rw_mbytes_per_sec": 0, 00:13:39.109 "r_mbytes_per_sec": 0, 00:13:39.109 "w_mbytes_per_sec": 0 00:13:39.109 }, 00:13:39.109 "claimed": false, 00:13:39.109 "zoned": false, 00:13:39.109 "supported_io_types": { 00:13:39.109 "read": true, 00:13:39.109 "write": true, 00:13:39.109 "unmap": true, 00:13:39.109 "flush": true, 00:13:39.109 "reset": true, 00:13:39.109 "nvme_admin": false, 00:13:39.109 "nvme_io": false, 00:13:39.109 "nvme_io_md": false, 00:13:39.109 "write_zeroes": true, 00:13:39.109 "zcopy": true, 00:13:39.109 "get_zone_info": false, 00:13:39.109 "zone_management": false, 00:13:39.109 "zone_append": false, 00:13:39.109 "compare": false, 00:13:39.109 "compare_and_write": false, 00:13:39.109 "abort": true, 00:13:39.109 "seek_hole": false, 00:13:39.109 "seek_data": false, 00:13:39.109 "copy": true, 00:13:39.109 "nvme_iov_md": false 00:13:39.109 }, 00:13:39.109 "memory_domains": [ 00:13:39.109 { 
00:13:39.109 "dma_device_id": "system", 00:13:39.109 "dma_device_type": 1 00:13:39.109 }, 00:13:39.109 { 00:13:39.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.109 "dma_device_type": 2 00:13:39.109 } 00:13:39.109 ], 00:13:39.109 "driver_specific": {} 00:13:39.109 } 00:13:39.109 ]' 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:13:39.109 ************************************ 00:13:39.109 END TEST rpc_plugins 00:13:39.109 ************************************ 00:13:39.109 19:15:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:39.109 00:13:39.109 real 0m0.161s 00:13:39.109 user 0m0.110s 00:13:39.109 sys 0m0.016s 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.109 19:15:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:39.368 19:15:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:39.368 19:15:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.368 19:15:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.368 19:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.368 ************************************ 00:13:39.368 START TEST rpc_trace_cmd_test 00:13:39.368 ************************************ 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:13:39.368 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56857", 00:13:39.368 "tpoint_group_mask": "0x8", 00:13:39.368 "iscsi_conn": { 00:13:39.368 "mask": "0x2", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "scsi": { 00:13:39.368 "mask": "0x4", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "bdev": { 00:13:39.368 "mask": "0x8", 00:13:39.368 "tpoint_mask": "0xffffffffffffffff" 00:13:39.368 }, 00:13:39.368 "nvmf_rdma": { 00:13:39.368 "mask": "0x10", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "nvmf_tcp": { 00:13:39.368 "mask": "0x20", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "ftl": { 00:13:39.368 
"mask": "0x40", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "blobfs": { 00:13:39.368 "mask": "0x80", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "dsa": { 00:13:39.368 "mask": "0x200", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "thread": { 00:13:39.368 "mask": "0x400", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "nvme_pcie": { 00:13:39.368 "mask": "0x800", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "iaa": { 00:13:39.368 "mask": "0x1000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "nvme_tcp": { 00:13:39.368 "mask": "0x2000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "bdev_nvme": { 00:13:39.368 "mask": "0x4000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "sock": { 00:13:39.368 "mask": "0x8000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "blob": { 00:13:39.368 "mask": "0x10000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "bdev_raid": { 00:13:39.368 "mask": "0x20000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 }, 00:13:39.368 "scheduler": { 00:13:39.368 "mask": "0x40000", 00:13:39.368 "tpoint_mask": "0x0" 00:13:39.368 } 00:13:39.368 }' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:39.368 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:39.679 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:39.679 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:39.679 ************************************ 00:13:39.679 END TEST rpc_trace_cmd_test 00:13:39.679 ************************************ 00:13:39.679 19:15:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:39.679 00:13:39.679 real 0m0.284s 00:13:39.679 user 0m0.237s 00:13:39.679 sys 0m0.037s 00:13:39.679 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 19:15:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:13:39.679 19:15:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:39.679 19:15:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:39.679 19:15:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.679 19:15:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.679 19:15:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 ************************************ 00:13:39.679 START TEST rpc_daemon_integrity 00:13:39.679 ************************************ 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 
19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:39.679 { 00:13:39.679 "name": "Malloc2", 00:13:39.679 "aliases": [ 00:13:39.679 "1b1b957d-58b9-4c6e-bba2-ae460ec53419" 00:13:39.679 ], 00:13:39.679 "product_name": "Malloc disk", 00:13:39.679 "block_size": 512, 00:13:39.679 "num_blocks": 16384, 00:13:39.679 "uuid": "1b1b957d-58b9-4c6e-bba2-ae460ec53419", 00:13:39.679 "assigned_rate_limits": { 00:13:39.679 "rw_ios_per_sec": 0, 00:13:39.679 "rw_mbytes_per_sec": 0, 00:13:39.679 "r_mbytes_per_sec": 0, 00:13:39.679 "w_mbytes_per_sec": 0 00:13:39.679 }, 00:13:39.679 "claimed": false, 00:13:39.679 "zoned": false, 00:13:39.679 "supported_io_types": { 00:13:39.679 "read": true, 00:13:39.679 "write": true, 00:13:39.679 "unmap": true, 00:13:39.679 "flush": true, 00:13:39.679 "reset": true, 00:13:39.679 "nvme_admin": false, 00:13:39.679 "nvme_io": false, 00:13:39.679 "nvme_io_md": false, 00:13:39.679 "write_zeroes": true, 00:13:39.679 "zcopy": true, 00:13:39.679 "get_zone_info": false, 00:13:39.679 "zone_management": false, 00:13:39.679 "zone_append": false, 00:13:39.679 "compare": false, 00:13:39.679 "compare_and_write": false, 00:13:39.679 "abort": true, 00:13:39.679 "seek_hole": false, 00:13:39.679 "seek_data": false, 00:13:39.679 "copy": true, 00:13:39.679 "nvme_iov_md": false 00:13:39.679 }, 00:13:39.679 "memory_domains": [ 00:13:39.679 { 00:13:39.679 "dma_device_id": "system", 00:13:39.679 "dma_device_type": 1 00:13:39.679 }, 00:13:39.679 { 00:13:39.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.679 "dma_device_type": 2 00:13:39.679 } 00:13:39.679 ], 00:13:39.679 "driver_specific": {} 00:13:39.679 } 00:13:39.679 ]' 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.679 [2024-10-17 19:15:48.913349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:13:39.679 [2024-10-17 19:15:48.913418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:39.679 [2024-10-17 19:15:48.913441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b8da90 00:13:39.679 [2024-10-17 19:15:48.913453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.679 [2024-10-17 19:15:48.915730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.679 [2024-10-17 19:15:48.915899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:39.679 Passthru0 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.679 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:39.939 { 00:13:39.939 "name": "Malloc2", 00:13:39.939 "aliases": [ 00:13:39.939 "1b1b957d-58b9-4c6e-bba2-ae460ec53419" 00:13:39.939 ], 00:13:39.939 "product_name": "Malloc disk", 00:13:39.939 "block_size": 512, 00:13:39.939 "num_blocks": 16384, 00:13:39.939 "uuid": "1b1b957d-58b9-4c6e-bba2-ae460ec53419", 00:13:39.939 "assigned_rate_limits": { 00:13:39.939 "rw_ios_per_sec": 0, 00:13:39.939 "rw_mbytes_per_sec": 0, 00:13:39.939 "r_mbytes_per_sec": 0, 00:13:39.939 "w_mbytes_per_sec": 0 00:13:39.939 }, 00:13:39.939 "claimed": true, 00:13:39.939 "claim_type": "exclusive_write", 00:13:39.939 "zoned": false, 00:13:39.939 "supported_io_types": { 00:13:39.939 "read": true, 00:13:39.939 "write": true, 00:13:39.939 "unmap": true, 00:13:39.939 "flush": true, 00:13:39.939 "reset": true, 00:13:39.939 "nvme_admin": false, 00:13:39.939 "nvme_io": false, 00:13:39.939 "nvme_io_md": false, 00:13:39.939 "write_zeroes": true, 00:13:39.939 "zcopy": true, 00:13:39.939 "get_zone_info": false, 00:13:39.939 "zone_management": false, 00:13:39.939 "zone_append": false, 00:13:39.939 "compare": false, 00:13:39.939 "compare_and_write": false, 00:13:39.939 "abort": true, 00:13:39.939 "seek_hole": false, 00:13:39.939 "seek_data": false, 00:13:39.939 "copy": true, 00:13:39.939 "nvme_iov_md": false 00:13:39.939 }, 00:13:39.939 "memory_domains": [ 00:13:39.939 { 00:13:39.939 "dma_device_id": "system", 00:13:39.939 "dma_device_type": 1 00:13:39.939 }, 00:13:39.939 { 00:13:39.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.939 "dma_device_type": 2 00:13:39.939 } 00:13:39.939 ], 00:13:39.939 "driver_specific": {} 00:13:39.939 }, 00:13:39.939 { 00:13:39.939 "name": "Passthru0", 00:13:39.939 "aliases": [ 00:13:39.939 "b6949748-f0f2-51c7-8965-d565d6d701d0" 00:13:39.939 ], 00:13:39.939 "product_name": "passthru", 00:13:39.939 "block_size": 512, 00:13:39.939 "num_blocks": 16384, 00:13:39.939 "uuid": "b6949748-f0f2-51c7-8965-d565d6d701d0", 00:13:39.939 "assigned_rate_limits": { 00:13:39.939 "rw_ios_per_sec": 0, 00:13:39.939 "rw_mbytes_per_sec": 0, 00:13:39.939 "r_mbytes_per_sec": 0, 00:13:39.939 "w_mbytes_per_sec": 0 00:13:39.939 }, 00:13:39.939 "claimed": false, 00:13:39.939 "zoned": false, 00:13:39.939 "supported_io_types": { 00:13:39.939 "read": true, 00:13:39.939 "write": true, 00:13:39.939 "unmap": true, 00:13:39.939 "flush": true, 00:13:39.939 "reset": true, 00:13:39.939 "nvme_admin": false, 00:13:39.939 "nvme_io": false, 00:13:39.939 
"nvme_io_md": false, 00:13:39.939 "write_zeroes": true, 00:13:39.939 "zcopy": true, 00:13:39.939 "get_zone_info": false, 00:13:39.939 "zone_management": false, 00:13:39.939 "zone_append": false, 00:13:39.939 "compare": false, 00:13:39.939 "compare_and_write": false, 00:13:39.939 "abort": true, 00:13:39.939 "seek_hole": false, 00:13:39.939 "seek_data": false, 00:13:39.939 "copy": true, 00:13:39.939 "nvme_iov_md": false 00:13:39.939 }, 00:13:39.939 "memory_domains": [ 00:13:39.939 { 00:13:39.939 "dma_device_id": "system", 00:13:39.939 "dma_device_type": 1 00:13:39.939 }, 00:13:39.939 { 00:13:39.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.939 "dma_device_type": 2 00:13:39.939 } 00:13:39.939 ], 00:13:39.939 "driver_specific": { 00:13:39.939 "passthru": { 00:13:39.939 "name": "Passthru0", 00:13:39.939 "base_bdev_name": "Malloc2" 00:13:39.939 } 00:13:39.939 } 00:13:39.939 } 00:13:39.939 ]' 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.939 19:15:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:39.939 ************************************ 00:13:39.939 END TEST rpc_daemon_integrity 00:13:39.939 ************************************ 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:39.939 00:13:39.939 real 0m0.330s 00:13:39.939 user 0m0.221s 00:13:39.939 sys 0m0.040s 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.939 19:15:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:39.939 19:15:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:39.939 19:15:49 rpc -- rpc/rpc.sh@84 -- # killprocess 56857 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@950 -- # '[' -z 56857 ']' 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@954 -- # kill -0 56857 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@955 -- # uname 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56857 00:13:39.939 killing process with pid 56857 00:13:39.939 19:15:49 rpc -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56857' 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@969 -- # kill 56857 00:13:39.939 19:15:49 rpc -- common/autotest_common.sh@974 -- # wait 56857 00:13:40.505 ************************************ 00:13:40.505 END TEST rpc 00:13:40.505 ************************************ 00:13:40.505 00:13:40.505 real 0m2.483s 00:13:40.505 user 0m3.156s 00:13:40.505 sys 0m0.680s 00:13:40.505 19:15:49 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.505 19:15:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.505 19:15:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:40.505 19:15:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:40.505 19:15:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.505 19:15:49 -- common/autotest_common.sh@10 -- # set +x 00:13:40.505 ************************************ 00:13:40.505 START TEST skip_rpc 00:13:40.505 ************************************ 00:13:40.505 19:15:49 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:40.505 * Looking for test storage... 00:13:40.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:40.505 19:15:49 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:40.505 19:15:49 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:40.505 19:15:49 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:40.764 19:15:49 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.764 19:15:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:13:40.764 19:15:49 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.764 19:15:49 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:40.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.764 --rc genhtml_branch_coverage=1 00:13:40.764 --rc genhtml_function_coverage=1 00:13:40.764 --rc genhtml_legend=1 00:13:40.764 --rc geninfo_all_blocks=1 00:13:40.764 --rc geninfo_unexecuted_blocks=1 00:13:40.764 00:13:40.764 ' 00:13:40.764 19:15:49 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:40.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.764 --rc genhtml_branch_coverage=1 00:13:40.764 --rc genhtml_function_coverage=1 00:13:40.764 --rc genhtml_legend=1 00:13:40.764 --rc geninfo_all_blocks=1 00:13:40.764 --rc geninfo_unexecuted_blocks=1 00:13:40.764 00:13:40.764 ' 00:13:40.764 19:15:49 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:40.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.764 --rc genhtml_branch_coverage=1 00:13:40.765 --rc genhtml_function_coverage=1 00:13:40.765 --rc genhtml_legend=1 00:13:40.765 --rc geninfo_all_blocks=1 00:13:40.765 --rc geninfo_unexecuted_blocks=1 00:13:40.765 00:13:40.765 ' 00:13:40.765 19:15:49 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:40.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.765 --rc genhtml_branch_coverage=1 00:13:40.765 --rc genhtml_function_coverage=1 00:13:40.765 --rc genhtml_legend=1 00:13:40.765 --rc geninfo_all_blocks=1 00:13:40.765 --rc geninfo_unexecuted_blocks=1 00:13:40.765 00:13:40.765 ' 00:13:40.765 19:15:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:40.765 19:15:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:40.765 19:15:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:40.765 19:15:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:40.765 19:15:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.765 19:15:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.765 ************************************ 00:13:40.765 START TEST skip_rpc 00:13:40.765 ************************************ 00:13:40.765 19:15:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:13:40.765 19:15:49 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57056 00:13:40.765 19:15:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:40.765 19:15:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:40.765 19:15:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:40.765 [2024-10-17 19:15:49.895119] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:13:40.765 [2024-10-17 19:15:49.895575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57056 ] 00:13:41.023 [2024-10-17 19:15:50.036484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.023 [2024-10-17 19:15:50.101366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.023 [2024-10-17 19:15:50.169543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:13:46.382 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57056 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57056 ']' 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57056 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57056 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 57056' 00:13:46.383 killing process with pid 57056 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57056 00:13:46.383 19:15:54 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57056 00:13:46.383 00:13:46.383 real 0m5.429s 00:13:46.383 user 0m5.046s 00:13:46.383 sys 0m0.297s 00:13:46.383 19:15:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.383 ************************************ 00:13:46.383 END TEST skip_rpc 00:13:46.383 ************************************ 00:13:46.383 19:15:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 19:15:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:46.383 19:15:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:46.383 19:15:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.383 19:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 ************************************ 00:13:46.383 START TEST skip_rpc_with_json 00:13:46.383 ************************************ 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57142 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57142 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57142 ']' 00:13:46.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.383 19:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:46.383 [2024-10-17 19:15:55.368126] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:13:46.383 [2024-10-17 19:15:55.368269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57142 ] 00:13:46.383 [2024-10-17 19:15:55.508472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.383 [2024-10-17 19:15:55.572174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.642 [2024-10-17 19:15:55.639740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:47.209 [2024-10-17 19:15:56.381780] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:47.209 request: 00:13:47.209 { 00:13:47.209 "trtype": "tcp", 00:13:47.209 "method": "nvmf_get_transports", 00:13:47.209 "req_id": 1 00:13:47.209 } 00:13:47.209 Got JSON-RPC error response 00:13:47.209 response: 00:13:47.209 { 00:13:47.209 "code": -19, 00:13:47.209 "message": "No such device" 00:13:47.209 } 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:47.209 [2024-10-17 19:15:56.393886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.209 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:47.468 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.468 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:47.468 { 00:13:47.468 "subsystems": [ 00:13:47.468 { 00:13:47.468 "subsystem": "fsdev", 00:13:47.468 "config": [ 00:13:47.468 { 00:13:47.468 "method": "fsdev_set_opts", 00:13:47.468 "params": { 00:13:47.468 "fsdev_io_pool_size": 65535, 00:13:47.468 "fsdev_io_cache_size": 256 00:13:47.468 } 00:13:47.468 } 00:13:47.468 ] 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "subsystem": "keyring", 00:13:47.468 "config": [] 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "subsystem": "iobuf", 00:13:47.468 "config": [ 00:13:47.468 { 00:13:47.468 "method": "iobuf_set_options", 00:13:47.468 "params": { 00:13:47.468 "small_pool_count": 8192, 00:13:47.468 "large_pool_count": 1024, 00:13:47.468 "small_bufsize": 8192, 00:13:47.468 "large_bufsize": 135168 00:13:47.468 } 00:13:47.468 } 00:13:47.468 ] 00:13:47.468 
}, 00:13:47.468 { 00:13:47.468 "subsystem": "sock", 00:13:47.468 "config": [ 00:13:47.468 { 00:13:47.468 "method": "sock_set_default_impl", 00:13:47.468 "params": { 00:13:47.468 "impl_name": "uring" 00:13:47.468 } 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "method": "sock_impl_set_options", 00:13:47.468 "params": { 00:13:47.468 "impl_name": "ssl", 00:13:47.468 "recv_buf_size": 4096, 00:13:47.468 "send_buf_size": 4096, 00:13:47.468 "enable_recv_pipe": true, 00:13:47.468 "enable_quickack": false, 00:13:47.468 "enable_placement_id": 0, 00:13:47.468 "enable_zerocopy_send_server": true, 00:13:47.468 "enable_zerocopy_send_client": false, 00:13:47.468 "zerocopy_threshold": 0, 00:13:47.468 "tls_version": 0, 00:13:47.468 "enable_ktls": false 00:13:47.468 } 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "method": "sock_impl_set_options", 00:13:47.468 "params": { 00:13:47.468 "impl_name": "posix", 00:13:47.468 "recv_buf_size": 2097152, 00:13:47.468 "send_buf_size": 2097152, 00:13:47.468 "enable_recv_pipe": true, 00:13:47.468 "enable_quickack": false, 00:13:47.468 "enable_placement_id": 0, 00:13:47.468 "enable_zerocopy_send_server": true, 00:13:47.468 "enable_zerocopy_send_client": false, 00:13:47.468 "zerocopy_threshold": 0, 00:13:47.468 "tls_version": 0, 00:13:47.468 "enable_ktls": false 00:13:47.468 } 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "method": "sock_impl_set_options", 00:13:47.468 "params": { 00:13:47.468 "impl_name": "uring", 00:13:47.468 "recv_buf_size": 2097152, 00:13:47.468 "send_buf_size": 2097152, 00:13:47.468 "enable_recv_pipe": true, 00:13:47.468 "enable_quickack": false, 00:13:47.468 "enable_placement_id": 0, 00:13:47.468 "enable_zerocopy_send_server": false, 00:13:47.468 "enable_zerocopy_send_client": false, 00:13:47.468 "zerocopy_threshold": 0, 00:13:47.468 "tls_version": 0, 00:13:47.468 "enable_ktls": false 00:13:47.468 } 00:13:47.468 } 00:13:47.468 ] 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "subsystem": "vmd", 00:13:47.468 "config": [] 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "subsystem": "accel", 00:13:47.468 "config": [ 00:13:47.468 { 00:13:47.468 "method": "accel_set_options", 00:13:47.468 "params": { 00:13:47.468 "small_cache_size": 128, 00:13:47.468 "large_cache_size": 16, 00:13:47.468 "task_count": 2048, 00:13:47.468 "sequence_count": 2048, 00:13:47.468 "buf_count": 2048 00:13:47.468 } 00:13:47.468 } 00:13:47.468 ] 00:13:47.468 }, 00:13:47.468 { 00:13:47.468 "subsystem": "bdev", 00:13:47.468 "config": [ 00:13:47.468 { 00:13:47.468 "method": "bdev_set_options", 00:13:47.468 "params": { 00:13:47.468 "bdev_io_pool_size": 65535, 00:13:47.468 "bdev_io_cache_size": 256, 00:13:47.468 "bdev_auto_examine": true, 00:13:47.469 "iobuf_small_cache_size": 128, 00:13:47.469 "iobuf_large_cache_size": 16 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_raid_set_options", 00:13:47.469 "params": { 00:13:47.469 "process_window_size_kb": 1024, 00:13:47.469 "process_max_bandwidth_mb_sec": 0 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_iscsi_set_options", 00:13:47.469 "params": { 00:13:47.469 "timeout_sec": 30 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_nvme_set_options", 00:13:47.469 "params": { 00:13:47.469 "action_on_timeout": "none", 00:13:47.469 "timeout_us": 0, 00:13:47.469 "timeout_admin_us": 0, 00:13:47.469 "keep_alive_timeout_ms": 10000, 00:13:47.469 "arbitration_burst": 0, 00:13:47.469 "low_priority_weight": 0, 00:13:47.469 "medium_priority_weight": 0, 00:13:47.469 "high_priority_weight": 0, 
00:13:47.469 "nvme_adminq_poll_period_us": 10000, 00:13:47.469 "nvme_ioq_poll_period_us": 0, 00:13:47.469 "io_queue_requests": 0, 00:13:47.469 "delay_cmd_submit": true, 00:13:47.469 "transport_retry_count": 4, 00:13:47.469 "bdev_retry_count": 3, 00:13:47.469 "transport_ack_timeout": 0, 00:13:47.469 "ctrlr_loss_timeout_sec": 0, 00:13:47.469 "reconnect_delay_sec": 0, 00:13:47.469 "fast_io_fail_timeout_sec": 0, 00:13:47.469 "disable_auto_failback": false, 00:13:47.469 "generate_uuids": false, 00:13:47.469 "transport_tos": 0, 00:13:47.469 "nvme_error_stat": false, 00:13:47.469 "rdma_srq_size": 0, 00:13:47.469 "io_path_stat": false, 00:13:47.469 "allow_accel_sequence": false, 00:13:47.469 "rdma_max_cq_size": 0, 00:13:47.469 "rdma_cm_event_timeout_ms": 0, 00:13:47.469 "dhchap_digests": [ 00:13:47.469 "sha256", 00:13:47.469 "sha384", 00:13:47.469 "sha512" 00:13:47.469 ], 00:13:47.469 "dhchap_dhgroups": [ 00:13:47.469 "null", 00:13:47.469 "ffdhe2048", 00:13:47.469 "ffdhe3072", 00:13:47.469 "ffdhe4096", 00:13:47.469 "ffdhe6144", 00:13:47.469 "ffdhe8192" 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_nvme_set_hotplug", 00:13:47.469 "params": { 00:13:47.469 "period_us": 100000, 00:13:47.469 "enable": false 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "bdev_wait_for_examine" 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "scsi", 00:13:47.469 "config": null 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "scheduler", 00:13:47.469 "config": [ 00:13:47.469 { 00:13:47.469 "method": "framework_set_scheduler", 00:13:47.469 "params": { 00:13:47.469 "name": "static" 00:13:47.469 } 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "vhost_scsi", 00:13:47.469 "config": [] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "vhost_blk", 00:13:47.469 "config": [] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "ublk", 00:13:47.469 "config": [] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "nbd", 00:13:47.469 "config": [] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "nvmf", 00:13:47.469 "config": [ 00:13:47.469 { 00:13:47.469 "method": "nvmf_set_config", 00:13:47.469 "params": { 00:13:47.469 "discovery_filter": "match_any", 00:13:47.469 "admin_cmd_passthru": { 00:13:47.469 "identify_ctrlr": false 00:13:47.469 }, 00:13:47.469 "dhchap_digests": [ 00:13:47.469 "sha256", 00:13:47.469 "sha384", 00:13:47.469 "sha512" 00:13:47.469 ], 00:13:47.469 "dhchap_dhgroups": [ 00:13:47.469 "null", 00:13:47.469 "ffdhe2048", 00:13:47.469 "ffdhe3072", 00:13:47.469 "ffdhe4096", 00:13:47.469 "ffdhe6144", 00:13:47.469 "ffdhe8192" 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "nvmf_set_max_subsystems", 00:13:47.469 "params": { 00:13:47.469 "max_subsystems": 1024 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "nvmf_set_crdt", 00:13:47.469 "params": { 00:13:47.469 "crdt1": 0, 00:13:47.469 "crdt2": 0, 00:13:47.469 "crdt3": 0 00:13:47.469 } 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "method": "nvmf_create_transport", 00:13:47.469 "params": { 00:13:47.469 "trtype": "TCP", 00:13:47.469 "max_queue_depth": 128, 00:13:47.469 "max_io_qpairs_per_ctrlr": 127, 00:13:47.469 "in_capsule_data_size": 4096, 00:13:47.469 "max_io_size": 131072, 00:13:47.469 "io_unit_size": 131072, 00:13:47.469 "max_aq_depth": 128, 00:13:47.469 "num_shared_buffers": 511, 00:13:47.469 "buf_cache_size": 4294967295, 00:13:47.469 
"dif_insert_or_strip": false, 00:13:47.469 "zcopy": false, 00:13:47.469 "c2h_success": true, 00:13:47.469 "sock_priority": 0, 00:13:47.469 "abort_timeout_sec": 1, 00:13:47.469 "ack_timeout": 0, 00:13:47.469 "data_wr_pool_size": 0 00:13:47.469 } 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 }, 00:13:47.469 { 00:13:47.469 "subsystem": "iscsi", 00:13:47.469 "config": [ 00:13:47.469 { 00:13:47.469 "method": "iscsi_set_options", 00:13:47.469 "params": { 00:13:47.469 "node_base": "iqn.2016-06.io.spdk", 00:13:47.469 "max_sessions": 128, 00:13:47.469 "max_connections_per_session": 2, 00:13:47.469 "max_queue_depth": 64, 00:13:47.469 "default_time2wait": 2, 00:13:47.469 "default_time2retain": 20, 00:13:47.469 "first_burst_length": 8192, 00:13:47.469 "immediate_data": true, 00:13:47.469 "allow_duplicated_isid": false, 00:13:47.469 "error_recovery_level": 0, 00:13:47.469 "nop_timeout": 60, 00:13:47.469 "nop_in_interval": 30, 00:13:47.469 "disable_chap": false, 00:13:47.469 "require_chap": false, 00:13:47.469 "mutual_chap": false, 00:13:47.469 "chap_group": 0, 00:13:47.469 "max_large_datain_per_connection": 64, 00:13:47.469 "max_r2t_per_connection": 4, 00:13:47.469 "pdu_pool_size": 36864, 00:13:47.469 "immediate_data_pool_size": 16384, 00:13:47.469 "data_out_pool_size": 2048 00:13:47.469 } 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 ] 00:13:47.469 } 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57142 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57142 ']' 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57142 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57142 00:13:47.469 killing process with pid 57142 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57142' 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57142 00:13:47.469 19:15:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57142 00:13:48.051 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57170 00:13:48.051 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:48.051 19:15:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57170 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57170 ']' 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57170 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.321 19:16:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57170 00:13:53.321 killing process with pid 57170 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57170' 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57170 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57170 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:53.321 ************************************ 00:13:53.321 END TEST skip_rpc_with_json 00:13:53.321 ************************************ 00:13:53.321 00:13:53.321 real 0m7.112s 00:13:53.321 user 0m6.885s 00:13:53.321 sys 0m0.669s 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 19:16:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:53.321 19:16:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:53.321 19:16:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.321 19:16:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.321 ************************************ 00:13:53.321 START TEST skip_rpc_with_delay 00:13:53.321 ************************************ 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:53.321 [2024-10-17 19:16:02.532937] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.321 00:13:53.321 real 0m0.100s 00:13:53.321 user 0m0.070s 00:13:53.321 sys 0m0.028s 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.321 ************************************ 00:13:53.321 END TEST skip_rpc_with_delay 00:13:53.321 ************************************ 00:13:53.321 19:16:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 19:16:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:53.580 19:16:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:53.580 19:16:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:53.580 19:16:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:53.580 19:16:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.580 19:16:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 ************************************ 00:13:53.580 START TEST exit_on_failed_rpc_init 00:13:53.580 ************************************ 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57279 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57279 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57279 ']' 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.580 19:16:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 [2024-10-17 19:16:02.702310] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
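The '--wait-for-rpc' error a few entries back is the expected result of the skip_rpc_with_delay case: spdk_tgt rejects '--wait-for-rpc' when '--no-rpc-server' means no RPC server will ever come up, so the test only passes when the binary exits non-zero. A minimal sketch of that negative check, reusing the exact flags and binary path from the trace (the error message wording is just what this build prints):

    # expect spdk_tgt to refuse --wait-for-rpc when the RPC server is disabled
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt unexpectedly accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi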
00:13:53.580 [2024-10-17 19:16:02.702660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57279 ] 00:13:53.845 [2024-10-17 19:16:02.840800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.845 [2024-10-17 19:16:02.930971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.845 [2024-10-17 19:16:03.011284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:54.810 19:16:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:54.810 [2024-10-17 19:16:03.821217] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:13:54.810 [2024-10-17 19:16:03.821323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57297 ] 00:13:54.810 [2024-10-17 19:16:03.963426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.810 [2024-10-17 19:16:04.040453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.810 [2024-10-17 19:16:04.040760] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:54.810 [2024-10-17 19:16:04.040869] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:54.810 [2024-10-17 19:16:04.040943] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:13:55.068 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57279 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57279 ']' 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57279 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57279 00:13:55.069 killing process with pid 57279 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57279' 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57279 00:13:55.069 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57279 00:13:55.327 00:13:55.327 real 0m1.903s 00:13:55.327 user 0m2.270s 00:13:55.327 sys 0m0.430s 00:13:55.327 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.327 ************************************ 00:13:55.327 19:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:55.327 END TEST exit_on_failed_rpc_init 00:13:55.327 ************************************ 00:13:55.327 19:16:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:55.327 00:13:55.327 real 0m14.947s 00:13:55.327 user 0m14.456s 00:13:55.327 sys 0m1.637s 00:13:55.327 19:16:04 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.327 19:16:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.327 ************************************ 00:13:55.327 END TEST skip_rpc 00:13:55.327 ************************************ 00:13:55.586 19:16:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:55.586 19:16:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:55.586 19:16:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.586 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.586 
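The exit_on_failed_rpc_init failure above is likewise deliberate: the second spdk_tgt is started without its own RPC socket, so it collides with the first target's /var/tmp/spdk.sock and aborts with 'Unable to start RPC service'. Outside of this negative test, two targets can run side by side if the second one is given a distinct socket with -r and is then driven through that socket; a sketch under that assumption (the second socket path below is only an example):

    # first target keeps the default /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second target gets its own RPC socket and is addressed through it
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_second.sock rpc_get_methods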
************************************ 00:13:55.586 START TEST rpc_client 00:13:55.586 ************************************ 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:55.586 * Looking for test storage... 00:13:55.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.586 19:16:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:13:55.586 19:16:04 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:55.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.587 --rc genhtml_branch_coverage=1 00:13:55.587 --rc genhtml_function_coverage=1 00:13:55.587 --rc genhtml_legend=1 00:13:55.587 --rc geninfo_all_blocks=1 00:13:55.587 --rc geninfo_unexecuted_blocks=1 00:13:55.587 00:13:55.587 ' 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:55.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.587 --rc genhtml_branch_coverage=1 00:13:55.587 --rc genhtml_function_coverage=1 00:13:55.587 --rc genhtml_legend=1 00:13:55.587 --rc geninfo_all_blocks=1 00:13:55.587 --rc geninfo_unexecuted_blocks=1 00:13:55.587 00:13:55.587 ' 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:55.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.587 --rc genhtml_branch_coverage=1 00:13:55.587 --rc genhtml_function_coverage=1 00:13:55.587 --rc genhtml_legend=1 00:13:55.587 --rc geninfo_all_blocks=1 00:13:55.587 --rc geninfo_unexecuted_blocks=1 00:13:55.587 00:13:55.587 ' 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:55.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.587 --rc genhtml_branch_coverage=1 00:13:55.587 --rc genhtml_function_coverage=1 00:13:55.587 --rc genhtml_legend=1 00:13:55.587 --rc geninfo_all_blocks=1 00:13:55.587 --rc geninfo_unexecuted_blocks=1 00:13:55.587 00:13:55.587 ' 00:13:55.587 19:16:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:55.587 OK 00:13:55.587 19:16:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:55.587 00:13:55.587 real 0m0.218s 00:13:55.587 user 0m0.131s 00:13:55.587 sys 0m0.098s 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.587 19:16:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:55.587 ************************************ 00:13:55.587 END TEST rpc_client 00:13:55.587 ************************************ 00:13:55.846 19:16:04 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:55.846 19:16:04 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:55.846 19:16:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.846 19:16:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.846 ************************************ 00:13:55.846 START TEST json_config 00:13:55.846 ************************************ 00:13:55.846 19:16:04 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:55.846 19:16:04 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:55.846 19:16:04 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:13:55.846 19:16:04 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.846 19:16:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.846 19:16:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.846 19:16:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.846 19:16:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.846 19:16:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.846 19:16:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:13:55.846 19:16:05 json_config -- scripts/common.sh@345 -- # : 1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.846 19:16:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.846 19:16:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@353 -- # local d=1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.846 19:16:05 json_config -- scripts/common.sh@355 -- # echo 1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.846 19:16:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@353 -- # local d=2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.846 19:16:05 json_config -- scripts/common.sh@355 -- # echo 2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.846 19:16:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.846 19:16:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.846 19:16:05 json_config -- scripts/common.sh@368 -- # return 0 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.846 --rc genhtml_branch_coverage=1 00:13:55.846 --rc genhtml_function_coverage=1 00:13:55.846 --rc genhtml_legend=1 00:13:55.846 --rc geninfo_all_blocks=1 00:13:55.846 --rc geninfo_unexecuted_blocks=1 00:13:55.846 00:13:55.846 ' 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.846 --rc genhtml_branch_coverage=1 00:13:55.846 --rc genhtml_function_coverage=1 00:13:55.846 --rc genhtml_legend=1 00:13:55.846 --rc geninfo_all_blocks=1 00:13:55.846 --rc geninfo_unexecuted_blocks=1 00:13:55.846 00:13:55.846 ' 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.846 --rc genhtml_branch_coverage=1 00:13:55.846 --rc genhtml_function_coverage=1 00:13:55.846 --rc genhtml_legend=1 00:13:55.846 --rc geninfo_all_blocks=1 00:13:55.846 --rc geninfo_unexecuted_blocks=1 00:13:55.846 00:13:55.846 ' 00:13:55.846 19:16:05 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.846 --rc genhtml_branch_coverage=1 00:13:55.846 --rc genhtml_function_coverage=1 00:13:55.846 --rc genhtml_legend=1 00:13:55.846 --rc geninfo_all_blocks=1 00:13:55.846 --rc geninfo_unexecuted_blocks=1 00:13:55.846 00:13:55.846 ' 00:13:55.846 19:16:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.846 19:16:05 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.846 19:16:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.846 19:16:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.846 19:16:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.846 19:16:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.846 19:16:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 19:16:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 19:16:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 19:16:05 json_config -- paths/export.sh@5 -- # export PATH 00:13:55.846 19:16:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@51 -- # : 0 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.846 19:16:05 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:55.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.846 19:16:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.846 19:16:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:55.846 19:16:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:55.846 19:16:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:13:55.847 INFO: JSON configuration test init 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:55.847 19:16:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:13:55.847 19:16:05 json_config -- json_config/common.sh@9 -- # local app=target 00:13:55.847 19:16:05 json_config -- json_config/common.sh@10 -- # shift 
00:13:55.847 19:16:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:55.847 19:16:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:55.847 19:16:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:13:55.847 19:16:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:55.847 Waiting for target to run... 00:13:55.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:55.847 19:16:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:55.847 19:16:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57437 00:13:55.847 19:16:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:55.847 19:16:05 json_config -- json_config/common.sh@25 -- # waitforlisten 57437 /var/tmp/spdk_tgt.sock 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@831 -- # '[' -z 57437 ']' 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:55.847 19:16:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.847 19:16:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:56.105 [2024-10-17 19:16:05.157433] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:13:56.105 [2024-10-17 19:16:05.157550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57437 ] 00:13:56.365 [2024-10-17 19:16:05.576006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.623 [2024-10-17 19:16:05.640539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.190 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@864 -- # return 0 00:13:57.190 19:16:06 json_config -- json_config/common.sh@26 -- # echo '' 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.190 19:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:13:57.190 19:16:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:13:57.190 19:16:06 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:13:57.448 [2024-10-17 19:16:06.585246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.706 19:16:06 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:13:57.707 19:16:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:57.707 19:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:13:57.707 19:16:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:13:57.707 19:16:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:13:57.964 19:16:07 json_config -- json_config/json_config.sh@54 -- # sort 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:13:57.965 19:16:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.965 19:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:13:57.965 19:16:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:57.965 19:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.965 19:16:07 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:13:57.965 19:16:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:57.965 19:16:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:58.221 MallocForNvmf0 00:13:58.221 19:16:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:58.221 19:16:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:58.509 MallocForNvmf1 00:13:58.509 19:16:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:13:58.509 19:16:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:13:58.800 [2024-10-17 19:16:07.970160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.800 19:16:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.800 19:16:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:59.058 19:16:08 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:59.058 19:16:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:59.317 19:16:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:59.317 19:16:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:59.575 19:16:08 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:59.575 19:16:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:59.833 [2024-10-17 19:16:09.030831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:59.833 19:16:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:13:59.833 19:16:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:59.833 19:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 19:16:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:14:00.091 19:16:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.091 19:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.091 19:16:09 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:14:00.091 19:16:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:00.092 19:16:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:00.350 MallocBdevForConfigChangeCheck 00:14:00.350 19:16:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:14:00.350 19:16:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.350 19:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 19:16:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:14:00.350 19:16:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:00.918 INFO: shutting down applications... 00:14:00.918 19:16:09 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:14:00.918 19:16:09 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:14:00.918 19:16:09 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:14:00.918 19:16:09 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:14:00.918 19:16:09 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:14:01.192 Calling clear_iscsi_subsystem 00:14:01.192 Calling clear_nvmf_subsystem 00:14:01.192 Calling clear_nbd_subsystem 00:14:01.192 Calling clear_ublk_subsystem 00:14:01.192 Calling clear_vhost_blk_subsystem 00:14:01.192 Calling clear_vhost_scsi_subsystem 00:14:01.193 Calling clear_bdev_subsystem 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:14:01.193 19:16:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:14:01.455 19:16:10 json_config -- json_config/json_config.sh@352 -- # break 00:14:01.455 19:16:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:14:01.455 19:16:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:14:01.455 19:16:10 json_config -- json_config/common.sh@31 -- # local app=target 00:14:01.455 19:16:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:01.455 19:16:10 json_config -- json_config/common.sh@35 -- # [[ -n 57437 ]] 00:14:01.455 19:16:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57437 00:14:01.455 19:16:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:01.455 19:16:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:01.455 19:16:10 json_config -- json_config/common.sh@41 -- # kill -0 57437 00:14:01.455 19:16:10 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:14:02.022 19:16:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:02.022 19:16:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:02.022 19:16:11 json_config -- json_config/common.sh@41 -- # kill -0 57437 00:14:02.022 SPDK target shutdown done 00:14:02.022 INFO: relaunching applications... 00:14:02.022 19:16:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:02.022 19:16:11 json_config -- json_config/common.sh@43 -- # break 00:14:02.022 19:16:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:02.022 19:16:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:02.022 19:16:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:14:02.022 19:16:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:02.022 19:16:11 json_config -- json_config/common.sh@9 -- # local app=target 00:14:02.022 19:16:11 json_config -- json_config/common.sh@10 -- # shift 00:14:02.022 19:16:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:02.022 19:16:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:02.022 19:16:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:02.022 Waiting for target to run... 00:14:02.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:02.022 19:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:02.022 19:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:02.022 19:16:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57638 00:14:02.022 19:16:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:02.022 19:16:11 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:02.022 19:16:11 json_config -- json_config/common.sh@25 -- # waitforlisten 57638 /var/tmp/spdk_tgt.sock 00:14:02.022 19:16:11 json_config -- common/autotest_common.sh@831 -- # '[' -z 57638 ']' 00:14:02.022 19:16:11 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:02.022 19:16:11 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.023 19:16:11 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:02.023 19:16:11 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.023 19:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.023 [2024-10-17 19:16:11.249659] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
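The shutdown-and-relaunch sequence above is the core of the json_config test: the live configuration is saved over RPC, the target is stopped, and a fresh target is booted straight from the saved JSON instead of being reconfigured call by call. In outline, with the same binary, socket, and config path used in this run:

    # dump the live configuration, then restart the target from the dump
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json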
00:14:02.023 [2024-10-17 19:16:11.250019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57638 ] 00:14:02.590 [2024-10-17 19:16:11.666678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.590 [2024-10-17 19:16:11.718715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.848 [2024-10-17 19:16:11.857802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.848 [2024-10-17 19:16:12.078326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.106 [2024-10-17 19:16:12.110440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:03.106 19:16:12 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.106 19:16:12 json_config -- common/autotest_common.sh@864 -- # return 0 00:14:03.106 19:16:12 json_config -- json_config/common.sh@26 -- # echo '' 00:14:03.106 00:14:03.106 19:16:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:14:03.106 INFO: Checking if target configuration is the same... 00:14:03.106 19:16:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:14:03.106 19:16:12 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:03.106 19:16:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:14:03.106 19:16:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:03.106 + '[' 2 -ne 2 ']' 00:14:03.106 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:03.106 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:14:03.106 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:03.106 +++ basename /dev/fd/62 00:14:03.106 ++ mktemp /tmp/62.XXX 00:14:03.106 + tmp_file_1=/tmp/62.fMq 00:14:03.106 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:03.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:03.106 + tmp_file_2=/tmp/spdk_tgt_config.json.8IQ 00:14:03.106 + ret=0 00:14:03.106 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:03.683 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:03.683 + diff -u /tmp/62.fMq /tmp/spdk_tgt_config.json.8IQ 00:14:03.683 INFO: JSON config files are the same 00:14:03.683 + echo 'INFO: JSON config files are the same' 00:14:03.683 + rm /tmp/62.fMq /tmp/spdk_tgt_config.json.8IQ 00:14:03.683 + exit 0 00:14:03.683 INFO: changing configuration and checking if this can be detected... 00:14:03.683 19:16:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:14:03.683 19:16:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
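The 'JSON config files are the same' verdict above comes from a normalized comparison rather than a byte-for-byte one: the saved spdk_tgt_config.json and a fresh save_config dump from the relaunched target are both passed through config_filter.py -method sort before diffing, so ordering differences are not counted as changes. Roughly (the temporary file names here are illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted.json
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted.json
    diff -u /tmp/saved.sorted.json /tmp/live.sorted.json   # exit status 0 means no change detected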
00:14:03.683 19:16:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:03.683 19:16:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:03.941 19:16:13 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:03.941 19:16:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:14:03.941 19:16:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:03.941 + '[' 2 -ne 2 ']' 00:14:03.941 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:14:03.941 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:14:03.941 + rootdir=/home/vagrant/spdk_repo/spdk 00:14:03.941 +++ basename /dev/fd/62 00:14:03.941 ++ mktemp /tmp/62.XXX 00:14:03.941 + tmp_file_1=/tmp/62.nrV 00:14:03.941 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:03.941 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:03.941 + tmp_file_2=/tmp/spdk_tgt_config.json.tPU 00:14:03.941 + ret=0 00:14:03.941 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:04.507 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:14:04.507 + diff -u /tmp/62.nrV /tmp/spdk_tgt_config.json.tPU 00:14:04.507 + ret=1 00:14:04.507 + echo '=== Start of file: /tmp/62.nrV ===' 00:14:04.507 + cat /tmp/62.nrV 00:14:04.507 + echo '=== End of file: /tmp/62.nrV ===' 00:14:04.507 + echo '' 00:14:04.507 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tPU ===' 00:14:04.507 + cat /tmp/spdk_tgt_config.json.tPU 00:14:04.507 + echo '=== End of file: /tmp/spdk_tgt_config.json.tPU ===' 00:14:04.507 + echo '' 00:14:04.507 + rm /tmp/62.nrV /tmp/spdk_tgt_config.json.tPU 00:14:04.507 + exit 1 00:14:04.507 INFO: configuration change detected. 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
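For the change-detection half, the test first makes a real configuration change over RPC — deleting the MallocBdevForConfigChangeCheck bdev created during setup — and then repeats the same sorted diff, this time expecting it to fail. The change itself is a single call of the form shown in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck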
00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@324 -- # [[ -n 57638 ]] 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@200 -- # uname -s 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.507 19:16:13 json_config -- json_config/json_config.sh@330 -- # killprocess 57638 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@950 -- # '[' -z 57638 ']' 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@954 -- # kill -0 57638 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@955 -- # uname 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.507 19:16:13 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57638 00:14:04.507 killing process with pid 57638 00:14:04.508 19:16:13 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.508 19:16:13 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.508 19:16:13 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57638' 00:14:04.508 19:16:13 json_config -- common/autotest_common.sh@969 -- # kill 57638 00:14:04.508 19:16:13 json_config -- common/autotest_common.sh@974 -- # wait 57638 00:14:04.766 19:16:13 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:14:04.766 19:16:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:14:04.766 19:16:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.766 19:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.766 INFO: Success 00:14:04.766 19:16:14 json_config -- json_config/json_config.sh@335 -- # return 0 00:14:04.766 19:16:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:14:04.766 00:14:04.766 real 0m9.140s 00:14:04.766 user 0m13.191s 00:14:04.766 sys 0m1.894s 00:14:04.766 
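The change-detection step traced above works the other way around: mutate the running target, here by deleting the marker bdev, and expect the same sorted diff to now fail. A sketch under the same assumptions as before (socket path and bdev name taken from the trace, error handling trimmed):

rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock

# Remove the marker bdev so the live config no longer matches the saved one.
"$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck

live=$(mktemp) && saved=$(mktemp)
"$rootdir/scripts/rpc.py" -s "$sock" save_config \
    | "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
"$rootdir/test/json_config/config_filter.py" -method sort \
    < "$rootdir/spdk_tgt_config.json" > "$saved"

if ! diff -u "$live" "$saved" > /dev/null; then
    echo 'INFO: configuration change detected.'   # the expected outcome here
fi
rm -f "$live" "$saved"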
************************************ 00:14:04.766 END TEST json_config 00:14:04.766 ************************************ 00:14:04.766 19:16:14 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.766 19:16:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.024 19:16:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:05.024 19:16:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:05.024 19:16:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.024 19:16:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.024 ************************************ 00:14:05.024 START TEST json_config_extra_key 00:14:05.024 ************************************ 00:14:05.024 19:16:14 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:14:05.024 19:16:14 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:05.024 19:16:14 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:14:05.024 19:16:14 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.025 --rc genhtml_branch_coverage=1 00:14:05.025 --rc genhtml_function_coverage=1 00:14:05.025 --rc genhtml_legend=1 00:14:05.025 --rc geninfo_all_blocks=1 00:14:05.025 --rc geninfo_unexecuted_blocks=1 00:14:05.025 00:14:05.025 ' 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.025 --rc genhtml_branch_coverage=1 00:14:05.025 --rc genhtml_function_coverage=1 00:14:05.025 --rc genhtml_legend=1 00:14:05.025 --rc geninfo_all_blocks=1 00:14:05.025 --rc geninfo_unexecuted_blocks=1 00:14:05.025 00:14:05.025 ' 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.025 --rc genhtml_branch_coverage=1 00:14:05.025 --rc genhtml_function_coverage=1 00:14:05.025 --rc genhtml_legend=1 00:14:05.025 --rc geninfo_all_blocks=1 00:14:05.025 --rc geninfo_unexecuted_blocks=1 00:14:05.025 00:14:05.025 ' 00:14:05.025 19:16:14 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:05.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.025 --rc genhtml_branch_coverage=1 00:14:05.025 --rc genhtml_function_coverage=1 00:14:05.025 --rc genhtml_legend=1 00:14:05.025 --rc geninfo_all_blocks=1 00:14:05.025 --rc geninfo_unexecuted_blocks=1 00:14:05.025 00:14:05.025 ' 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.025 19:16:14 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.025 19:16:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.025 19:16:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.025 19:16:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.025 19:16:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.025 19:16:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:14:05.025 19:16:14 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.025 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.025 19:16:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:14:05.025 INFO: launching applications... 
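The lcov probe that opens this test, and recurs before each of the following ones, is a pure-bash dotted-version comparison: split both versions on dots, compare field by field, and stop at the first difference. A trimmed sketch of that idea, not the verbatim scripts/common.sh implementation, assuming plain numeric dotted versions:

# Trimmed sketch of the cmp_versions idea traced above (scripts/common.sh also
# handles '-' and ':' separators and the gt/ge/le operators).
version_lt() {            # usage: version_lt 1.15 2 -> exit status 0 if $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}    # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1              # equal counts as "not less than"
}

if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "old lcov: enabling branch/function coverage flags"
fi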
00:14:05.025 19:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:05.025 19:16:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:14:05.025 19:16:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:14:05.025 19:16:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:05.025 19:16:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:05.025 19:16:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:05.026 Waiting for target to run... 00:14:05.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57792 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57792 /var/tmp/spdk_tgt.sock 00:14:05.026 19:16:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57792 ']' 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.026 19:16:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:05.284 [2024-10-17 19:16:14.336691] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:05.284 [2024-10-17 19:16:14.336798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:14:05.543 [2024-10-17 19:16:14.753221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.801 [2024-10-17 19:16:14.819875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.801 [2024-10-17 19:16:14.858136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.430 00:14:06.430 INFO: shutting down applications... 00:14:06.430 19:16:15 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.430 19:16:15 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:14:06.430 19:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
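What the extra-key launch above amounts to: start spdk_tgt with a JSON config and a dedicated RPC socket, then poll that socket until the target answers. The poll loop below is an illustrative stand-in for the harness's waitforlisten helper; the binary path, flags, and socket path are the ones from the trace:

rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock

# Start the target with the extra_key JSON config and its own RPC socket.
"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json "$rootdir/test/json_config/extra_key.json" &
tgt_pid=$!

# Simplified waitforlisten: poll until the RPC socket answers, or give up.
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        echo "target is listening on $sock (pid $tgt_pid)"
        break
    fi
    sleep 0.1
done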
00:14:06.430 19:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57792 ]] 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57792 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57792 00:14:06.430 19:16:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57792 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:06.688 19:16:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:06.688 SPDK target shutdown done 00:14:06.688 19:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:06.688 Success 00:14:06.688 00:14:06.688 real 0m1.801s 00:14:06.688 user 0m1.692s 00:14:06.688 sys 0m0.479s 00:14:06.688 19:16:15 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.688 19:16:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:06.688 ************************************ 00:14:06.688 END TEST json_config_extra_key 00:14:06.688 ************************************ 00:14:06.688 19:16:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:06.688 19:16:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:06.688 19:16:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.688 19:16:15 -- common/autotest_common.sh@10 -- # set +x 00:14:06.688 ************************************ 00:14:06.688 START TEST alias_rpc 00:14:06.688 ************************************ 00:14:06.688 19:16:15 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:06.948 * Looking for test storage... 
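The shutdown just traced for the extra-key target follows a simple pattern: send SIGINT, then poll kill -0 for up to 30 half-second intervals before declaring success. A compact sketch, with the escalation path omitted as it is in this run; tgt_pid is the pid recorded when the app was started:

# Sketch of json_config_test_shutdown_app as traced above.
kill -SIGINT "$tgt_pid"

for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$tgt_pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done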
00:14:06.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.948 19:16:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 19:16:16 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:06.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.948 --rc genhtml_branch_coverage=1 00:14:06.948 --rc genhtml_function_coverage=1 00:14:06.948 --rc genhtml_legend=1 00:14:06.948 --rc geninfo_all_blocks=1 00:14:06.948 --rc geninfo_unexecuted_blocks=1 00:14:06.948 00:14:06.948 ' 00:14:06.948 19:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:06.948 19:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57864 00:14:06.948 19:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57864 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57864 ']' 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.948 19:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:06.948 19:16:16 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.949 19:16:16 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.949 19:16:16 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.949 19:16:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.949 [2024-10-17 19:16:16.178988] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:06.949 [2024-10-17 19:16:16.179116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57864 ] 00:14:07.207 [2024-10-17 19:16:16.318599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.207 [2024-10-17 19:16:16.386355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.465 [2024-10-17 19:16:16.466912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.465 19:16:16 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.465 19:16:16 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:07.466 19:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:14:08.032 19:16:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57864 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57864 ']' 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57864 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57864 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.032 killing process with pid 57864 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57864' 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@969 -- # kill 57864 00:14:08.032 19:16:17 alias_rpc -- common/autotest_common.sh@974 -- # wait 57864 00:14:08.291 00:14:08.291 real 0m1.549s 00:14:08.291 user 0m1.666s 00:14:08.291 sys 0m0.462s 00:14:08.291 19:16:17 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.291 ************************************ 00:14:08.291 END TEST alias_rpc 00:14:08.291 ************************************ 00:14:08.291 19:16:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 19:16:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:14:08.291 19:16:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:08.291 19:16:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:08.291 19:16:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.291 19:16:17 -- common/autotest_common.sh@10 -- # set +x 00:14:08.291 ************************************ 00:14:08.291 START TEST spdkcli_tcp 00:14:08.291 ************************************ 00:14:08.291 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:08.550 * Looking for test storage... 
00:14:08.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.550 19:16:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.550 --rc genhtml_branch_coverage=1 00:14:08.550 --rc genhtml_function_coverage=1 00:14:08.550 --rc genhtml_legend=1 00:14:08.550 --rc geninfo_all_blocks=1 00:14:08.550 --rc geninfo_unexecuted_blocks=1 00:14:08.550 00:14:08.550 ' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.550 --rc genhtml_branch_coverage=1 00:14:08.550 --rc genhtml_function_coverage=1 00:14:08.550 --rc genhtml_legend=1 00:14:08.550 --rc geninfo_all_blocks=1 00:14:08.550 --rc geninfo_unexecuted_blocks=1 00:14:08.550 
00:14:08.550 ' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.550 --rc genhtml_branch_coverage=1 00:14:08.550 --rc genhtml_function_coverage=1 00:14:08.550 --rc genhtml_legend=1 00:14:08.550 --rc geninfo_all_blocks=1 00:14:08.550 --rc geninfo_unexecuted_blocks=1 00:14:08.550 00:14:08.550 ' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.550 --rc genhtml_branch_coverage=1 00:14:08.550 --rc genhtml_function_coverage=1 00:14:08.550 --rc genhtml_legend=1 00:14:08.550 --rc geninfo_all_blocks=1 00:14:08.550 --rc geninfo_unexecuted_blocks=1 00:14:08.550 00:14:08.550 ' 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57941 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:08.550 19:16:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57941 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57941 ']' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.550 19:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.550 [2024-10-17 19:16:17.769347] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:08.550 [2024-10-17 19:16:17.769465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57941 ] 00:14:08.809 [2024-10-17 19:16:17.903719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:08.809 [2024-10-17 19:16:17.974181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.809 [2024-10-17 19:16:17.974187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.809 [2024-10-17 19:16:18.052665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.107 19:16:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.107 19:16:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:14:09.107 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57950 00:14:09.107 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:09.107 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:09.365 [ 00:14:09.365 "bdev_malloc_delete", 00:14:09.365 "bdev_malloc_create", 00:14:09.365 "bdev_null_resize", 00:14:09.365 "bdev_null_delete", 00:14:09.365 "bdev_null_create", 00:14:09.365 "bdev_nvme_cuse_unregister", 00:14:09.365 "bdev_nvme_cuse_register", 00:14:09.365 "bdev_opal_new_user", 00:14:09.365 "bdev_opal_set_lock_state", 00:14:09.365 "bdev_opal_delete", 00:14:09.365 "bdev_opal_get_info", 00:14:09.365 "bdev_opal_create", 00:14:09.365 "bdev_nvme_opal_revert", 00:14:09.365 "bdev_nvme_opal_init", 00:14:09.365 "bdev_nvme_send_cmd", 00:14:09.365 "bdev_nvme_set_keys", 00:14:09.365 "bdev_nvme_get_path_iostat", 00:14:09.365 "bdev_nvme_get_mdns_discovery_info", 00:14:09.365 "bdev_nvme_stop_mdns_discovery", 00:14:09.365 "bdev_nvme_start_mdns_discovery", 00:14:09.365 "bdev_nvme_set_multipath_policy", 00:14:09.365 "bdev_nvme_set_preferred_path", 00:14:09.365 "bdev_nvme_get_io_paths", 00:14:09.365 "bdev_nvme_remove_error_injection", 00:14:09.365 "bdev_nvme_add_error_injection", 00:14:09.365 "bdev_nvme_get_discovery_info", 00:14:09.365 "bdev_nvme_stop_discovery", 00:14:09.365 "bdev_nvme_start_discovery", 00:14:09.365 "bdev_nvme_get_controller_health_info", 00:14:09.365 "bdev_nvme_disable_controller", 00:14:09.365 "bdev_nvme_enable_controller", 00:14:09.365 "bdev_nvme_reset_controller", 00:14:09.365 "bdev_nvme_get_transport_statistics", 00:14:09.365 "bdev_nvme_apply_firmware", 00:14:09.365 "bdev_nvme_detach_controller", 00:14:09.365 "bdev_nvme_get_controllers", 00:14:09.365 "bdev_nvme_attach_controller", 00:14:09.365 "bdev_nvme_set_hotplug", 00:14:09.365 "bdev_nvme_set_options", 00:14:09.365 "bdev_passthru_delete", 00:14:09.365 "bdev_passthru_create", 00:14:09.365 "bdev_lvol_set_parent_bdev", 00:14:09.365 "bdev_lvol_set_parent", 00:14:09.365 "bdev_lvol_check_shallow_copy", 00:14:09.365 "bdev_lvol_start_shallow_copy", 00:14:09.365 "bdev_lvol_grow_lvstore", 00:14:09.365 "bdev_lvol_get_lvols", 00:14:09.365 "bdev_lvol_get_lvstores", 00:14:09.365 "bdev_lvol_delete", 00:14:09.365 "bdev_lvol_set_read_only", 00:14:09.365 "bdev_lvol_resize", 00:14:09.365 "bdev_lvol_decouple_parent", 00:14:09.365 "bdev_lvol_inflate", 00:14:09.365 "bdev_lvol_rename", 00:14:09.365 "bdev_lvol_clone_bdev", 00:14:09.365 "bdev_lvol_clone", 00:14:09.365 "bdev_lvol_snapshot", 
00:14:09.365 "bdev_lvol_create", 00:14:09.365 "bdev_lvol_delete_lvstore", 00:14:09.365 "bdev_lvol_rename_lvstore", 00:14:09.366 "bdev_lvol_create_lvstore", 00:14:09.366 "bdev_raid_set_options", 00:14:09.366 "bdev_raid_remove_base_bdev", 00:14:09.366 "bdev_raid_add_base_bdev", 00:14:09.366 "bdev_raid_delete", 00:14:09.366 "bdev_raid_create", 00:14:09.366 "bdev_raid_get_bdevs", 00:14:09.366 "bdev_error_inject_error", 00:14:09.366 "bdev_error_delete", 00:14:09.366 "bdev_error_create", 00:14:09.366 "bdev_split_delete", 00:14:09.366 "bdev_split_create", 00:14:09.366 "bdev_delay_delete", 00:14:09.366 "bdev_delay_create", 00:14:09.366 "bdev_delay_update_latency", 00:14:09.366 "bdev_zone_block_delete", 00:14:09.366 "bdev_zone_block_create", 00:14:09.366 "blobfs_create", 00:14:09.366 "blobfs_detect", 00:14:09.366 "blobfs_set_cache_size", 00:14:09.366 "bdev_aio_delete", 00:14:09.366 "bdev_aio_rescan", 00:14:09.366 "bdev_aio_create", 00:14:09.366 "bdev_ftl_set_property", 00:14:09.366 "bdev_ftl_get_properties", 00:14:09.366 "bdev_ftl_get_stats", 00:14:09.366 "bdev_ftl_unmap", 00:14:09.366 "bdev_ftl_unload", 00:14:09.366 "bdev_ftl_delete", 00:14:09.366 "bdev_ftl_load", 00:14:09.366 "bdev_ftl_create", 00:14:09.366 "bdev_virtio_attach_controller", 00:14:09.366 "bdev_virtio_scsi_get_devices", 00:14:09.366 "bdev_virtio_detach_controller", 00:14:09.366 "bdev_virtio_blk_set_hotplug", 00:14:09.366 "bdev_iscsi_delete", 00:14:09.366 "bdev_iscsi_create", 00:14:09.366 "bdev_iscsi_set_options", 00:14:09.366 "bdev_uring_delete", 00:14:09.366 "bdev_uring_rescan", 00:14:09.366 "bdev_uring_create", 00:14:09.366 "accel_error_inject_error", 00:14:09.366 "ioat_scan_accel_module", 00:14:09.366 "dsa_scan_accel_module", 00:14:09.366 "iaa_scan_accel_module", 00:14:09.366 "keyring_file_remove_key", 00:14:09.366 "keyring_file_add_key", 00:14:09.366 "keyring_linux_set_options", 00:14:09.366 "fsdev_aio_delete", 00:14:09.366 "fsdev_aio_create", 00:14:09.366 "iscsi_get_histogram", 00:14:09.366 "iscsi_enable_histogram", 00:14:09.366 "iscsi_set_options", 00:14:09.366 "iscsi_get_auth_groups", 00:14:09.366 "iscsi_auth_group_remove_secret", 00:14:09.366 "iscsi_auth_group_add_secret", 00:14:09.366 "iscsi_delete_auth_group", 00:14:09.366 "iscsi_create_auth_group", 00:14:09.366 "iscsi_set_discovery_auth", 00:14:09.366 "iscsi_get_options", 00:14:09.366 "iscsi_target_node_request_logout", 00:14:09.366 "iscsi_target_node_set_redirect", 00:14:09.366 "iscsi_target_node_set_auth", 00:14:09.366 "iscsi_target_node_add_lun", 00:14:09.366 "iscsi_get_stats", 00:14:09.366 "iscsi_get_connections", 00:14:09.366 "iscsi_portal_group_set_auth", 00:14:09.366 "iscsi_start_portal_group", 00:14:09.366 "iscsi_delete_portal_group", 00:14:09.366 "iscsi_create_portal_group", 00:14:09.366 "iscsi_get_portal_groups", 00:14:09.366 "iscsi_delete_target_node", 00:14:09.366 "iscsi_target_node_remove_pg_ig_maps", 00:14:09.366 "iscsi_target_node_add_pg_ig_maps", 00:14:09.366 "iscsi_create_target_node", 00:14:09.366 "iscsi_get_target_nodes", 00:14:09.366 "iscsi_delete_initiator_group", 00:14:09.366 "iscsi_initiator_group_remove_initiators", 00:14:09.366 "iscsi_initiator_group_add_initiators", 00:14:09.366 "iscsi_create_initiator_group", 00:14:09.366 "iscsi_get_initiator_groups", 00:14:09.366 "nvmf_set_crdt", 00:14:09.366 "nvmf_set_config", 00:14:09.366 "nvmf_set_max_subsystems", 00:14:09.366 "nvmf_stop_mdns_prr", 00:14:09.366 "nvmf_publish_mdns_prr", 00:14:09.366 "nvmf_subsystem_get_listeners", 00:14:09.366 "nvmf_subsystem_get_qpairs", 00:14:09.366 
"nvmf_subsystem_get_controllers", 00:14:09.366 "nvmf_get_stats", 00:14:09.366 "nvmf_get_transports", 00:14:09.366 "nvmf_create_transport", 00:14:09.366 "nvmf_get_targets", 00:14:09.366 "nvmf_delete_target", 00:14:09.366 "nvmf_create_target", 00:14:09.366 "nvmf_subsystem_allow_any_host", 00:14:09.366 "nvmf_subsystem_set_keys", 00:14:09.366 "nvmf_subsystem_remove_host", 00:14:09.366 "nvmf_subsystem_add_host", 00:14:09.366 "nvmf_ns_remove_host", 00:14:09.366 "nvmf_ns_add_host", 00:14:09.366 "nvmf_subsystem_remove_ns", 00:14:09.366 "nvmf_subsystem_set_ns_ana_group", 00:14:09.366 "nvmf_subsystem_add_ns", 00:14:09.366 "nvmf_subsystem_listener_set_ana_state", 00:14:09.366 "nvmf_discovery_get_referrals", 00:14:09.366 "nvmf_discovery_remove_referral", 00:14:09.366 "nvmf_discovery_add_referral", 00:14:09.366 "nvmf_subsystem_remove_listener", 00:14:09.366 "nvmf_subsystem_add_listener", 00:14:09.366 "nvmf_delete_subsystem", 00:14:09.366 "nvmf_create_subsystem", 00:14:09.366 "nvmf_get_subsystems", 00:14:09.366 "env_dpdk_get_mem_stats", 00:14:09.366 "nbd_get_disks", 00:14:09.366 "nbd_stop_disk", 00:14:09.366 "nbd_start_disk", 00:14:09.366 "ublk_recover_disk", 00:14:09.366 "ublk_get_disks", 00:14:09.366 "ublk_stop_disk", 00:14:09.366 "ublk_start_disk", 00:14:09.366 "ublk_destroy_target", 00:14:09.366 "ublk_create_target", 00:14:09.366 "virtio_blk_create_transport", 00:14:09.366 "virtio_blk_get_transports", 00:14:09.366 "vhost_controller_set_coalescing", 00:14:09.366 "vhost_get_controllers", 00:14:09.366 "vhost_delete_controller", 00:14:09.366 "vhost_create_blk_controller", 00:14:09.366 "vhost_scsi_controller_remove_target", 00:14:09.366 "vhost_scsi_controller_add_target", 00:14:09.366 "vhost_start_scsi_controller", 00:14:09.366 "vhost_create_scsi_controller", 00:14:09.366 "thread_set_cpumask", 00:14:09.366 "scheduler_set_options", 00:14:09.366 "framework_get_governor", 00:14:09.366 "framework_get_scheduler", 00:14:09.366 "framework_set_scheduler", 00:14:09.366 "framework_get_reactors", 00:14:09.366 "thread_get_io_channels", 00:14:09.366 "thread_get_pollers", 00:14:09.366 "thread_get_stats", 00:14:09.366 "framework_monitor_context_switch", 00:14:09.366 "spdk_kill_instance", 00:14:09.366 "log_enable_timestamps", 00:14:09.366 "log_get_flags", 00:14:09.366 "log_clear_flag", 00:14:09.366 "log_set_flag", 00:14:09.366 "log_get_level", 00:14:09.366 "log_set_level", 00:14:09.366 "log_get_print_level", 00:14:09.366 "log_set_print_level", 00:14:09.366 "framework_enable_cpumask_locks", 00:14:09.366 "framework_disable_cpumask_locks", 00:14:09.366 "framework_wait_init", 00:14:09.366 "framework_start_init", 00:14:09.366 "scsi_get_devices", 00:14:09.366 "bdev_get_histogram", 00:14:09.366 "bdev_enable_histogram", 00:14:09.366 "bdev_set_qos_limit", 00:14:09.366 "bdev_set_qd_sampling_period", 00:14:09.366 "bdev_get_bdevs", 00:14:09.366 "bdev_reset_iostat", 00:14:09.366 "bdev_get_iostat", 00:14:09.366 "bdev_examine", 00:14:09.366 "bdev_wait_for_examine", 00:14:09.366 "bdev_set_options", 00:14:09.366 "accel_get_stats", 00:14:09.366 "accel_set_options", 00:14:09.366 "accel_set_driver", 00:14:09.366 "accel_crypto_key_destroy", 00:14:09.366 "accel_crypto_keys_get", 00:14:09.366 "accel_crypto_key_create", 00:14:09.366 "accel_assign_opc", 00:14:09.366 "accel_get_module_info", 00:14:09.366 "accel_get_opc_assignments", 00:14:09.366 "vmd_rescan", 00:14:09.366 "vmd_remove_device", 00:14:09.366 "vmd_enable", 00:14:09.366 "sock_get_default_impl", 00:14:09.366 "sock_set_default_impl", 00:14:09.366 "sock_impl_set_options", 00:14:09.366 
"sock_impl_get_options", 00:14:09.366 "iobuf_get_stats", 00:14:09.366 "iobuf_set_options", 00:14:09.366 "keyring_get_keys", 00:14:09.366 "framework_get_pci_devices", 00:14:09.366 "framework_get_config", 00:14:09.366 "framework_get_subsystems", 00:14:09.366 "fsdev_set_opts", 00:14:09.366 "fsdev_get_opts", 00:14:09.366 "trace_get_info", 00:14:09.366 "trace_get_tpoint_group_mask", 00:14:09.366 "trace_disable_tpoint_group", 00:14:09.366 "trace_enable_tpoint_group", 00:14:09.366 "trace_clear_tpoint_mask", 00:14:09.366 "trace_set_tpoint_mask", 00:14:09.366 "notify_get_notifications", 00:14:09.366 "notify_get_types", 00:14:09.366 "spdk_get_version", 00:14:09.366 "rpc_get_methods" 00:14:09.366 ] 00:14:09.366 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.366 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:09.366 19:16:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57941 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57941 ']' 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57941 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.366 19:16:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57941 00:14:09.625 19:16:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.625 19:16:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.625 19:16:18 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57941' 00:14:09.625 killing process with pid 57941 00:14:09.625 19:16:18 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57941 00:14:09.625 19:16:18 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57941 00:14:09.884 00:14:09.884 real 0m1.527s 00:14:09.884 user 0m2.610s 00:14:09.884 sys 0m0.479s 00:14:09.884 19:16:19 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.884 19:16:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.884 ************************************ 00:14:09.884 END TEST spdkcli_tcp 00:14:09.884 ************************************ 00:14:09.884 19:16:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:09.884 19:16:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:09.884 19:16:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.884 19:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:09.884 ************************************ 00:14:09.884 START TEST dpdk_mem_utility 00:14:09.884 ************************************ 00:14:09.884 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:10.143 * Looking for test storage... 
00:14:10.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.143 19:16:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:10.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.143 --rc genhtml_branch_coverage=1 00:14:10.143 --rc genhtml_function_coverage=1 00:14:10.143 --rc genhtml_legend=1 00:14:10.143 --rc geninfo_all_blocks=1 00:14:10.143 --rc geninfo_unexecuted_blocks=1 00:14:10.143 00:14:10.143 ' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:10.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.143 --rc 
genhtml_branch_coverage=1 00:14:10.143 --rc genhtml_function_coverage=1 00:14:10.143 --rc genhtml_legend=1 00:14:10.143 --rc geninfo_all_blocks=1 00:14:10.143 --rc geninfo_unexecuted_blocks=1 00:14:10.143 00:14:10.143 ' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:10.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.143 --rc genhtml_branch_coverage=1 00:14:10.143 --rc genhtml_function_coverage=1 00:14:10.143 --rc genhtml_legend=1 00:14:10.143 --rc geninfo_all_blocks=1 00:14:10.143 --rc geninfo_unexecuted_blocks=1 00:14:10.143 00:14:10.143 ' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:10.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.143 --rc genhtml_branch_coverage=1 00:14:10.143 --rc genhtml_function_coverage=1 00:14:10.143 --rc genhtml_legend=1 00:14:10.143 --rc geninfo_all_blocks=1 00:14:10.143 --rc geninfo_unexecuted_blocks=1 00:14:10.143 00:14:10.143 ' 00:14:10.143 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:10.143 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58032 00:14:10.143 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:10.143 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58032 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58032 ']' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.143 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:10.143 [2024-10-17 19:16:19.339291] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:10.143 [2024-10-17 19:16:19.339395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:14:10.402 [2024-10-17 19:16:19.475596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.402 [2024-10-17 19:16:19.536673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.402 [2024-10-17 19:16:19.607857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.660 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.660 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:14:10.660 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:10.660 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:10.660 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.660 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:10.660 { 00:14:10.660 "filename": "/tmp/spdk_mem_dump.txt" 00:14:10.660 } 00:14:10.660 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.660 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:10.660 DPDK memory size 810.000000 MiB in 1 heap(s) 00:14:10.660 1 heaps totaling size 810.000000 MiB 00:14:10.660 size: 810.000000 MiB heap id: 0 00:14:10.660 end heaps---------- 00:14:10.660 9 mempools totaling size 595.772034 MiB 00:14:10.660 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:10.660 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:10.660 size: 92.545471 MiB name: bdev_io_58032 00:14:10.660 size: 50.003479 MiB name: msgpool_58032 00:14:10.660 size: 36.509338 MiB name: fsdev_io_58032 00:14:10.660 size: 21.763794 MiB name: PDU_Pool 00:14:10.660 size: 19.513306 MiB name: SCSI_TASK_Pool 00:14:10.660 size: 4.133484 MiB name: evtpool_58032 00:14:10.660 size: 0.026123 MiB name: Session_Pool 00:14:10.660 end mempools------- 00:14:10.660 6 memzones totaling size 4.142822 MiB 00:14:10.660 size: 1.000366 MiB name: RG_ring_0_58032 00:14:10.660 size: 1.000366 MiB name: RG_ring_1_58032 00:14:10.660 size: 1.000366 MiB name: RG_ring_4_58032 00:14:10.660 size: 1.000366 MiB name: RG_ring_5_58032 00:14:10.660 size: 0.125366 MiB name: RG_ring_2_58032 00:14:10.660 size: 0.015991 MiB name: RG_ring_3_58032 00:14:10.660 end memzones------- 00:14:10.660 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:14:10.919 heap id: 0 total size: 810.000000 MiB number of busy elements: 312 number of free elements: 15 00:14:10.919 list of free elements. 
size: 10.813416 MiB 00:14:10.919 element at address: 0x200018a00000 with size: 0.999878 MiB 00:14:10.919 element at address: 0x200018c00000 with size: 0.999878 MiB 00:14:10.919 element at address: 0x200031800000 with size: 0.994446 MiB 00:14:10.919 element at address: 0x200000400000 with size: 0.993958 MiB 00:14:10.919 element at address: 0x200006400000 with size: 0.959839 MiB 00:14:10.919 element at address: 0x200012c00000 with size: 0.954285 MiB 00:14:10.919 element at address: 0x200018e00000 with size: 0.936584 MiB 00:14:10.919 element at address: 0x200000200000 with size: 0.717346 MiB 00:14:10.919 element at address: 0x20001a600000 with size: 0.567871 MiB 00:14:10.919 element at address: 0x20000a600000 with size: 0.488892 MiB 00:14:10.919 element at address: 0x200000c00000 with size: 0.487000 MiB 00:14:10.919 element at address: 0x200019000000 with size: 0.485657 MiB 00:14:10.919 element at address: 0x200003e00000 with size: 0.480286 MiB 00:14:10.919 element at address: 0x200027a00000 with size: 0.395752 MiB 00:14:10.919 element at address: 0x200000800000 with size: 0.351746 MiB 00:14:10.919 list of standard malloc elements. size: 199.267700 MiB 00:14:10.919 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:14:10.919 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:14:10.919 element at address: 0x200018afff80 with size: 1.000122 MiB 00:14:10.919 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:14:10.919 element at address: 0x200018efff80 with size: 1.000122 MiB 00:14:10.919 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:14:10.919 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:14:10.919 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:14:10.919 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:14:10.919 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:14:10.919 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:14:10.919 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:14:10.919 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:14:10.919 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:14:10.920 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000085e580 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087e840 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087e900 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f080 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f140 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f200 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f380 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f440 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f500 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000087f680 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:14:10.920 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000cff000 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200003efb980 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691600 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691780 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691840 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691900 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692080 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692140 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692200 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692380 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692440 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692500 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692680 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692740 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692800 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692980 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:14:10.920 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693040 with size: 0.000183 MiB 
00:14:10.921 element at address: 0x20001a693100 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693280 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693340 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693400 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693580 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693640 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693700 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693880 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693940 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694000 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694180 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694240 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694300 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694480 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694540 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694600 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694780 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694840 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694900 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a695080 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a695140 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a695200 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a695380 with size: 0.000183 MiB 00:14:10.921 element at address: 0x20001a695440 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a65500 with size: 0.000183 MiB 00:14:10.921 element at 
address: 0x200027a655c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e700 
with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:14:10.921 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:14:10.921 list of memzone associated elements. 
size: 599.918884 MiB 00:14:10.921 element at address: 0x20001a695500 with size: 211.416748 MiB 00:14:10.921 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:10.921 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:14:10.921 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:10.921 element at address: 0x200012df4780 with size: 92.045044 MiB 00:14:10.921 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58032_0 00:14:10.921 element at address: 0x200000dff380 with size: 48.003052 MiB 00:14:10.921 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58032_0 00:14:10.921 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:14:10.921 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58032_0 00:14:10.922 element at address: 0x2000191be940 with size: 20.255554 MiB 00:14:10.922 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:10.922 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:14:10.922 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:10.922 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:14:10.922 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58032_0 00:14:10.922 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:14:10.922 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58032 00:14:10.922 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:14:10.922 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58032 00:14:10.922 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:14:10.922 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:10.922 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:14:10.922 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:10.922 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:14:10.922 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:10.922 element at address: 0x200003efba40 with size: 1.008118 MiB 00:14:10.922 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:10.922 element at address: 0x200000cff180 with size: 1.000488 MiB 00:14:10.922 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58032 00:14:10.922 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:14:10.922 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58032 00:14:10.922 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:14:10.922 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58032 00:14:10.922 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:14:10.922 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58032 00:14:10.922 element at address: 0x20000087f740 with size: 0.500488 MiB 00:14:10.922 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58032 00:14:10.922 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:14:10.922 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58032 00:14:10.922 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:14:10.922 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:14:10.922 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:14:10.922 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:10.922 element at address: 0x20001907c540 with size: 0.250488 MiB 00:14:10.922 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:14:10.922 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:14:10.922 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58032 00:14:10.922 element at address: 0x20000085e640 with size: 0.125488 MiB 00:14:10.922 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58032 00:14:10.922 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:14:10.922 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:10.922 element at address: 0x200027a65680 with size: 0.023743 MiB 00:14:10.922 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:10.922 element at address: 0x20000085a380 with size: 0.016113 MiB 00:14:10.922 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58032 00:14:10.922 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:14:10.922 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:10.922 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:14:10.922 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58032 00:14:10.922 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:14:10.922 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58032 00:14:10.922 element at address: 0x20000085a180 with size: 0.000305 MiB 00:14:10.922 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58032 00:14:10.922 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:14:10.922 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:10.922 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:10.922 19:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58032 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58032 ']' 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58032 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58032 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:10.922 killing process with pid 58032 00:14:10.922 19:16:19 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58032' 00:14:10.922 19:16:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58032 00:14:10.922 19:16:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58032 00:14:11.180 00:14:11.180 real 0m1.275s 00:14:11.180 user 0m1.228s 00:14:11.180 sys 0m0.434s 00:14:11.180 19:16:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.180 19:16:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:11.180 ************************************ 00:14:11.180 END TEST dpdk_mem_utility 00:14:11.180 ************************************ 00:14:11.180 19:16:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:11.180 19:16:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.180 19:16:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.180 19:16:20 -- common/autotest_common.sh@10 -- # set +x 
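For context on the dump above: the dpdk_mem_utility test starts a plain spdk_tgt, asks it over RPC to write its DPDK heap statistics to /tmp/spdk_mem_dump.txt, and then lets scripts/dpdk_mem_info.py summarize the heaps, mempools and memzones (the element-by-element listing is the -m 0 per-heap view). A minimal stand-alone sketch of the same flow, using the paths printed in the log; driving the RPC through scripts/rpc.py instead of the harness's rpc_cmd wrapper, and the fixed sleep in place of waitforlisten, are assumptions:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" &                      # target listens on /var/tmp/spdk.sock
  tgt_pid=$!
  sleep 2                                           # the test polls the socket via waitforlisten; a sleep stands in here
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats     # asks the target to write /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                  # heap / mempool / memzone summary
  "$SPDK/scripts/dpdk_mem_info.py" -m 0             # per-element detail for heap id 0
  kill "$tgt_pid"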
00:14:11.180 ************************************ 00:14:11.180 START TEST event 00:14:11.180 ************************************ 00:14:11.180 19:16:20 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:11.438 * Looking for test storage... 00:14:11.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:11.438 19:16:20 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:11.438 19:16:20 event -- common/autotest_common.sh@1691 -- # lcov --version 00:14:11.438 19:16:20 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:11.439 19:16:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.439 19:16:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.439 19:16:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.439 19:16:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.439 19:16:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.439 19:16:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.439 19:16:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.439 19:16:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.439 19:16:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.439 19:16:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.439 19:16:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.439 19:16:20 event -- scripts/common.sh@344 -- # case "$op" in 00:14:11.439 19:16:20 event -- scripts/common.sh@345 -- # : 1 00:14:11.439 19:16:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.439 19:16:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.439 19:16:20 event -- scripts/common.sh@365 -- # decimal 1 00:14:11.439 19:16:20 event -- scripts/common.sh@353 -- # local d=1 00:14:11.439 19:16:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.439 19:16:20 event -- scripts/common.sh@355 -- # echo 1 00:14:11.439 19:16:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.439 19:16:20 event -- scripts/common.sh@366 -- # decimal 2 00:14:11.439 19:16:20 event -- scripts/common.sh@353 -- # local d=2 00:14:11.439 19:16:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.439 19:16:20 event -- scripts/common.sh@355 -- # echo 2 00:14:11.439 19:16:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.439 19:16:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.439 19:16:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.439 19:16:20 event -- scripts/common.sh@368 -- # return 0 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:11.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.439 --rc genhtml_branch_coverage=1 00:14:11.439 --rc genhtml_function_coverage=1 00:14:11.439 --rc genhtml_legend=1 00:14:11.439 --rc geninfo_all_blocks=1 00:14:11.439 --rc geninfo_unexecuted_blocks=1 00:14:11.439 00:14:11.439 ' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:11.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.439 --rc genhtml_branch_coverage=1 00:14:11.439 --rc genhtml_function_coverage=1 00:14:11.439 --rc genhtml_legend=1 00:14:11.439 --rc 
geninfo_all_blocks=1 00:14:11.439 --rc geninfo_unexecuted_blocks=1 00:14:11.439 00:14:11.439 ' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:11.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.439 --rc genhtml_branch_coverage=1 00:14:11.439 --rc genhtml_function_coverage=1 00:14:11.439 --rc genhtml_legend=1 00:14:11.439 --rc geninfo_all_blocks=1 00:14:11.439 --rc geninfo_unexecuted_blocks=1 00:14:11.439 00:14:11.439 ' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:11.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.439 --rc genhtml_branch_coverage=1 00:14:11.439 --rc genhtml_function_coverage=1 00:14:11.439 --rc genhtml_legend=1 00:14:11.439 --rc geninfo_all_blocks=1 00:14:11.439 --rc geninfo_unexecuted_blocks=1 00:14:11.439 00:14:11.439 ' 00:14:11.439 19:16:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:11.439 19:16:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:14:11.439 19:16:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:14:11.439 19:16:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.439 19:16:20 event -- common/autotest_common.sh@10 -- # set +x 00:14:11.439 ************************************ 00:14:11.439 START TEST event_perf 00:14:11.439 ************************************ 00:14:11.439 19:16:20 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:11.439 Running I/O for 1 seconds...[2024-10-17 19:16:20.639379] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:11.439 [2024-10-17 19:16:20.639462] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:14:11.697 [2024-10-17 19:16:20.774631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.697 [2024-10-17 19:16:20.859877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.697 [2024-10-17 19:16:20.859973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.697 [2024-10-17 19:16:20.860100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.697 [2024-10-17 19:16:20.860108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.072 Running I/O for 1 seconds... 00:14:13.072 lcore 0: 112673 00:14:13.072 lcore 1: 112675 00:14:13.072 lcore 2: 112667 00:14:13.072 lcore 3: 112670 00:14:13.072 done. 
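The four "lcore N" lines just above are the per-core event counts from event_perf's one-second run across core mask 0xF; its timing summary and END TEST banner follow below. The tool can also be run by hand against a built tree; a small sketch with the same arguments the harness used (binary path as printed in the log):

  SPDK=/home/vagrant/spdk_repo/spdk
  # -m 0xF pins the app to cores 0-3, -t 1 runs the event loop for one second
  "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1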
00:14:13.072 00:14:13.072 real 0m1.291s 00:14:13.072 user 0m4.100s 00:14:13.072 sys 0m0.061s 00:14:13.072 19:16:21 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.072 19:16:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:14:13.072 ************************************ 00:14:13.072 END TEST event_perf 00:14:13.072 ************************************ 00:14:13.072 19:16:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:13.072 19:16:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:13.072 19:16:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.072 19:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:14:13.072 ************************************ 00:14:13.072 START TEST event_reactor 00:14:13.072 ************************************ 00:14:13.072 19:16:21 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:13.072 [2024-10-17 19:16:21.985045] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:13.072 [2024-10-17 19:16:21.985170] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:14:13.072 [2024-10-17 19:16:22.132967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.072 [2024-10-17 19:16:22.203531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.007 test_start 00:14:14.007 oneshot 00:14:14.007 tick 100 00:14:14.007 tick 100 00:14:14.007 tick 250 00:14:14.007 tick 100 00:14:14.007 tick 100 00:14:14.007 tick 100 00:14:14.007 tick 250 00:14:14.007 tick 500 00:14:14.007 tick 100 00:14:14.007 tick 100 00:14:14.007 tick 250 00:14:14.007 tick 100 00:14:14.007 tick 100 00:14:14.007 test_end 00:14:14.007 00:14:14.007 real 0m1.286s 00:14:14.007 user 0m1.124s 00:14:14.007 sys 0m0.056s 00:14:14.007 19:16:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.007 19:16:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:14:14.007 ************************************ 00:14:14.007 END TEST event_reactor 00:14:14.007 ************************************ 00:14:14.265 19:16:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:14.265 19:16:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:14.265 19:16:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.265 19:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:14:14.265 ************************************ 00:14:14.265 START TEST event_reactor_perf 00:14:14.265 ************************************ 00:14:14.265 19:16:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:14.265 [2024-10-17 19:16:23.319908] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:14.265 [2024-10-17 19:16:23.320027] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:14:14.265 [2024-10-17 19:16:23.498892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.525 [2024-10-17 19:16:23.575350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.466 test_start 00:14:15.466 test_end 00:14:15.466 Performance: 375064 events per second 00:14:15.466 00:14:15.466 real 0m1.323s 00:14:15.466 user 0m1.173s 00:14:15.466 sys 0m0.043s 00:14:15.466 ************************************ 00:14:15.466 END TEST event_reactor_perf 00:14:15.466 ************************************ 00:14:15.466 19:16:24 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.466 19:16:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:14:15.466 19:16:24 event -- event/event.sh@49 -- # uname -s 00:14:15.466 19:16:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:15.466 19:16:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:15.466 19:16:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.466 19:16:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.466 19:16:24 event -- common/autotest_common.sh@10 -- # set +x 00:14:15.466 ************************************ 00:14:15.466 START TEST event_scheduler 00:14:15.466 ************************************ 00:14:15.466 19:16:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:15.725 * Looking for test storage... 
00:14:15.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.725 19:16:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:15.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.725 --rc genhtml_branch_coverage=1 00:14:15.725 --rc genhtml_function_coverage=1 00:14:15.725 --rc genhtml_legend=1 00:14:15.725 --rc geninfo_all_blocks=1 00:14:15.725 --rc geninfo_unexecuted_blocks=1 00:14:15.725 00:14:15.725 ' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:15.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.725 --rc genhtml_branch_coverage=1 00:14:15.725 --rc genhtml_function_coverage=1 00:14:15.725 --rc genhtml_legend=1 00:14:15.725 --rc geninfo_all_blocks=1 00:14:15.725 --rc geninfo_unexecuted_blocks=1 00:14:15.725 00:14:15.725 ' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:15.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.725 --rc genhtml_branch_coverage=1 00:14:15.725 --rc genhtml_function_coverage=1 00:14:15.725 --rc genhtml_legend=1 00:14:15.725 --rc geninfo_all_blocks=1 00:14:15.725 --rc geninfo_unexecuted_blocks=1 00:14:15.725 00:14:15.725 ' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:15.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.725 --rc genhtml_branch_coverage=1 00:14:15.725 --rc genhtml_function_coverage=1 00:14:15.725 --rc genhtml_legend=1 00:14:15.725 --rc geninfo_all_blocks=1 00:14:15.725 --rc geninfo_unexecuted_blocks=1 00:14:15.725 00:14:15.725 ' 00:14:15.725 19:16:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:15.725 19:16:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58248 00:14:15.725 19:16:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:15.725 19:16:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:15.725 19:16:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58248 00:14:15.725 19:16:24 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58248 ']' 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.725 19:16:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:15.725 [2024-10-17 19:16:24.930575] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:15.725 [2024-10-17 19:16:24.930940] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:14:15.983 [2024-10-17 19:16:25.074840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.983 [2024-10-17 19:16:25.146092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.983 [2024-10-17 19:16:25.146203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.983 [2024-10-17 19:16:25.146345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.983 [2024-10-17 19:16:25.146354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:14:16.917 19:16:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:16.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:16.917 POWER: Cannot set governor of lcore 0 to userspace 00:14:16.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:16.917 POWER: Cannot set governor of lcore 0 to performance 00:14:16.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:16.917 POWER: Cannot set governor of lcore 0 to userspace 00:14:16.917 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:16.917 POWER: Cannot set governor of lcore 0 to userspace 00:14:16.917 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:14:16.917 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:14:16.917 POWER: Unable to set Power Management Environment for lcore 0 00:14:16.917 [2024-10-17 19:16:25.985644] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:14:16.917 [2024-10-17 19:16:25.985769] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:14:16.917 [2024-10-17 19:16:25.985812] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:14:16.917 [2024-10-17 19:16:25.985932] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:14:16.917 [2024-10-17 19:16:25.985973] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:14:16.917 [2024-10-17 19:16:25.986004] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.917 19:16:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.917 19:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:16.917 [2024-10-17 19:16:26.044572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:16.917 [2024-10-17 19:16:26.079503] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:14:16.917 19:16:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.917 19:16:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:16.917 19:16:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.917 19:16:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.917 19:16:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:16.917 ************************************ 00:14:16.917 START TEST scheduler_create_thread 00:14:16.917 ************************************ 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 2 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 3 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 4 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 5 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 6 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 7 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 8 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:16.918 9 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.918 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:17.176 10 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.177 19:16:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.177 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:17.435 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.435 19:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:17.435 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.435 19:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:19.338 19:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.338 19:16:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:19.338 19:16:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:19.338 19:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.338 19:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:20.272 ************************************ 00:14:20.272 END TEST scheduler_create_thread 00:14:20.272 ************************************ 00:14:20.272 19:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.272 00:14:20.272 real 0m3.094s 00:14:20.272 user 0m0.018s 00:14:20.272 sys 0m0.006s 00:14:20.272 19:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.272 19:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:20.272 19:16:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:20.272 19:16:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58248 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58248 ']' 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58248 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58248 00:14:20.272 killing process with pid 58248 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58248' 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58248 00:14:20.272 19:16:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58248 00:14:20.531 [2024-10-17 19:16:29.565234] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:14:20.789 ************************************ 00:14:20.789 END TEST event_scheduler 00:14:20.789 ************************************ 00:14:20.789 00:14:20.789 real 0m5.124s 00:14:20.789 user 0m10.229s 00:14:20.789 sys 0m0.379s 00:14:20.789 19:16:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.790 19:16:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:20.790 19:16:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:14:20.790 19:16:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:20.790 19:16:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:20.790 19:16:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.790 19:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:14:20.790 ************************************ 00:14:20.790 START TEST app_repeat 00:14:20.790 ************************************ 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:14:20.790 Process app_repeat pid: 58353 00:14:20.790 spdk_app_start Round 0 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58353 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58353' 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:20.790 19:16:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:14:20.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58353 ']' 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.790 19:16:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:20.790 [2024-10-17 19:16:29.888405] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:20.790 [2024-10-17 19:16:29.888722] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:14:20.790 [2024-10-17 19:16:30.030850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:21.048 [2024-10-17 19:16:30.103867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.048 [2024-10-17 19:16:30.103881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.048 [2024-10-17 19:16:30.164394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.048 19:16:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.048 19:16:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:21.048 19:16:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:21.307 Malloc0 00:14:21.564 19:16:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:21.822 Malloc1 00:14:21.822 19:16:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:21.822 19:16:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:22.080 /dev/nbd0 00:14:22.080 19:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.080 19:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:22.080 1+0 records in 00:14:22.080 1+0 records out 00:14:22.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321852 s, 12.7 MB/s 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.080 19:16:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:22.080 19:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.080 19:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.080 19:16:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:22.646 /dev/nbd1 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:22.646 1+0 records in 00:14:22.646 1+0 records out 00:14:22.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359273 s, 11.4 MB/s 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:22.646 19:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.646 19:16:31 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.646 19:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:22.904 19:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:22.904 { 00:14:22.904 "nbd_device": "/dev/nbd0", 00:14:22.904 "bdev_name": "Malloc0" 00:14:22.904 }, 00:14:22.904 { 00:14:22.904 "nbd_device": "/dev/nbd1", 00:14:22.904 "bdev_name": "Malloc1" 00:14:22.904 } 00:14:22.904 ]' 00:14:22.904 19:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:22.904 { 00:14:22.904 "nbd_device": "/dev/nbd0", 00:14:22.904 "bdev_name": "Malloc0" 00:14:22.904 }, 00:14:22.904 { 00:14:22.904 "nbd_device": "/dev/nbd1", 00:14:22.904 "bdev_name": "Malloc1" 00:14:22.904 } 00:14:22.904 ]' 00:14:22.904 19:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:22.904 /dev/nbd1' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:22.904 /dev/nbd1' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:22.904 256+0 records in 00:14:22.904 256+0 records out 00:14:22.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934332 s, 112 MB/s 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:22.904 256+0 records in 00:14:22.904 256+0 records out 00:14:22.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326266 s, 32.1 MB/s 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:22.904 256+0 records in 00:14:22.904 
256+0 records out 00:14:22.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259169 s, 40.5 MB/s 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:22.904 19:16:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.905 19:16:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:23.471 19:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.730 19:16:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:23.988 19:16:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:23.988 19:16:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:24.604 19:16:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:24.604 [2024-10-17 19:16:33.658280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:24.604 [2024-10-17 19:16:33.716031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.604 [2024-10-17 19:16:33.716045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.604 [2024-10-17 19:16:33.769887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.604 [2024-10-17 19:16:33.770233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:24.604 [2024-10-17 19:16:33.770392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:27.887 19:16:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:27.887 19:16:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:27.887 spdk_app_start Round 1 00:14:27.887 19:16:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58353 ']' 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:27.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.887 19:16:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:27.887 19:16:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:27.887 Malloc0 00:14:28.145 19:16:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:28.403 Malloc1 00:14:28.403 19:16:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.403 19:16:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:28.662 /dev/nbd0 00:14:28.662 19:16:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.662 19:16:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:28.662 1+0 records in 00:14:28.662 1+0 records out 
00:14:28.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570337 s, 7.2 MB/s 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.662 19:16:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:28.662 19:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.662 19:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.662 19:16:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:28.921 /dev/nbd1 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:28.921 1+0 records in 00:14:28.921 1+0 records out 00:14:28.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038752 s, 10.6 MB/s 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.921 19:16:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.921 19:16:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:29.488 { 00:14:29.488 "nbd_device": "/dev/nbd0", 00:14:29.488 "bdev_name": "Malloc0" 00:14:29.488 }, 00:14:29.488 { 00:14:29.488 "nbd_device": "/dev/nbd1", 00:14:29.488 "bdev_name": "Malloc1" 00:14:29.488 } 
00:14:29.488 ]' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:29.488 { 00:14:29.488 "nbd_device": "/dev/nbd0", 00:14:29.488 "bdev_name": "Malloc0" 00:14:29.488 }, 00:14:29.488 { 00:14:29.488 "nbd_device": "/dev/nbd1", 00:14:29.488 "bdev_name": "Malloc1" 00:14:29.488 } 00:14:29.488 ]' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:29.488 /dev/nbd1' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:29.488 /dev/nbd1' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:29.488 256+0 records in 00:14:29.488 256+0 records out 00:14:29.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763839 s, 137 MB/s 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:29.488 256+0 records in 00:14:29.488 256+0 records out 00:14:29.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225345 s, 46.5 MB/s 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:29.488 256+0 records in 00:14:29.488 256+0 records out 00:14:29.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266402 s, 39.4 MB/s 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:29.488 19:16:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.488 19:16:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.747 19:16:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.006 19:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:30.265 19:16:39 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:30.265 19:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:30.523 19:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:30.523 19:16:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:30.523 19:16:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:30.523 19:16:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:30.523 19:16:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:30.524 19:16:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:30.782 19:16:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:30.782 [2024-10-17 19:16:39.992580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.040 [2024-10-17 19:16:40.047623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.040 [2024-10-17 19:16:40.047633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.040 [2024-10-17 19:16:40.102558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.040 [2024-10-17 19:16:40.102880] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:31.040 [2024-10-17 19:16:40.103026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:33.595 19:16:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:33.595 19:16:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:33.595 spdk_app_start Round 2 00:14:33.595 19:16:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58353 ']' 00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:33.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.595 19:16:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:34.162 19:16:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.162 19:16:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:34.162 19:16:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:34.420 Malloc0 00:14:34.420 19:16:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:34.678 Malloc1 00:14:34.678 19:16:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:34.678 19:16:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:35.245 /dev/nbd0 00:14:35.245 19:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.245 19:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:35.245 1+0 records in 00:14:35.245 1+0 records out 
00:14:35.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234049 s, 17.5 MB/s 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.245 19:16:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:35.245 19:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.245 19:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.245 19:16:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:35.503 /dev/nbd1 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:35.503 1+0 records in 00:14:35.503 1+0 records out 00:14:35.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296449 s, 13.8 MB/s 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.503 19:16:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.503 19:16:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:35.763 { 00:14:35.763 "nbd_device": "/dev/nbd0", 00:14:35.763 "bdev_name": "Malloc0" 00:14:35.763 }, 00:14:35.763 { 00:14:35.763 "nbd_device": "/dev/nbd1", 00:14:35.763 "bdev_name": "Malloc1" 00:14:35.763 } 
00:14:35.763 ]' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:35.763 { 00:14:35.763 "nbd_device": "/dev/nbd0", 00:14:35.763 "bdev_name": "Malloc0" 00:14:35.763 }, 00:14:35.763 { 00:14:35.763 "nbd_device": "/dev/nbd1", 00:14:35.763 "bdev_name": "Malloc1" 00:14:35.763 } 00:14:35.763 ]' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:35.763 /dev/nbd1' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:35.763 /dev/nbd1' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:35.763 256+0 records in 00:14:35.763 256+0 records out 00:14:35.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653453 s, 160 MB/s 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:35.763 256+0 records in 00:14:35.763 256+0 records out 00:14:35.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022996 s, 45.6 MB/s 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:35.763 19:16:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:36.021 256+0 records in 00:14:36.021 256+0 records out 00:14:36.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250544 s, 41.9 MB/s 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.021 19:16:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:36.279 19:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.279 19:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.279 19:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.279 19:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.280 19:16:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.538 19:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:37.174 19:16:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:37.174 19:16:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:37.447 19:16:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:37.447 [2024-10-17 19:16:46.589967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.447 [2024-10-17 19:16:46.650063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.447 [2024-10-17 19:16:46.650075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.706 [2024-10-17 19:16:46.703550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.706 [2024-10-17 19:16:46.703654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:37.706 [2024-10-17 19:16:46.703669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:40.237 19:16:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58353 /var/tmp/spdk-nbd.sock 00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58353 ']' 00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
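The nbd_common.sh trace above reduces to a simple write-then-verify pattern: generate a 1 MiB random file, copy it onto every exported NBD device with O_DIRECT, then compare each device back against the file. A condensed sketch of that flow, using the same dd/cmp parameters shown in the trace (the temp-file path here is illustrative, not the repo path the helper actually uses):

    # write a 1 MiB random pattern and push it to each NBD device
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # read each device back and verify it byte-for-byte
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm -f "$tmp"

cmp exits non-zero on the first mismatching byte, which is what would fail the test if a write were lost.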
00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.237 19:16:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:40.803 19:16:49 event.app_repeat -- event/event.sh@39 -- # killprocess 58353 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58353 ']' 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58353 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58353 00:14:40.803 19:16:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.803 killing process with pid 58353 00:14:40.804 19:16:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.804 19:16:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58353' 00:14:40.804 19:16:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58353 00:14:40.804 19:16:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58353 00:14:40.804 spdk_app_start is called in Round 0. 00:14:40.804 Shutdown signal received, stop current app iteration 00:14:40.804 Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 reinitialization... 00:14:40.804 spdk_app_start is called in Round 1. 00:14:40.804 Shutdown signal received, stop current app iteration 00:14:40.804 Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 reinitialization... 00:14:40.804 spdk_app_start is called in Round 2. 00:14:40.804 Shutdown signal received, stop current app iteration 00:14:40.804 Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 reinitialization... 00:14:40.804 spdk_app_start is called in Round 3. 00:14:40.804 Shutdown signal received, stop current app iteration 00:14:40.804 19:16:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:40.804 19:16:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:14:40.804 00:14:40.804 real 0m20.151s 00:14:40.804 user 0m46.522s 00:14:40.804 sys 0m3.053s 00:14:40.804 19:16:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.804 ************************************ 00:14:40.804 END TEST app_repeat 00:14:40.804 ************************************ 00:14:40.804 19:16:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:40.804 19:16:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:40.804 19:16:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:40.804 19:16:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:40.804 19:16:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.804 19:16:50 event -- common/autotest_common.sh@10 -- # set +x 00:14:41.062 ************************************ 00:14:41.062 START TEST cpu_locks 00:14:41.062 ************************************ 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:41.062 * Looking for test storage... 
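The teardown sequence repeated throughout this suite follows one killprocess pattern: confirm the pid still exists, refuse to touch anything running as sudo, then kill it and wait for it so the exit status is reaped. Roughly, as reconstructed from the trace (not the verbatim autotest_common.sh helper):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # still running?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap it; callers may inspect the status
    }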
00:14:41.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.062 19:16:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.062 19:16:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.062 --rc genhtml_branch_coverage=1 00:14:41.062 --rc genhtml_function_coverage=1 00:14:41.062 --rc genhtml_legend=1 00:14:41.063 --rc geninfo_all_blocks=1 00:14:41.063 --rc geninfo_unexecuted_blocks=1 00:14:41.063 00:14:41.063 ' 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.063 --rc genhtml_branch_coverage=1 00:14:41.063 --rc genhtml_function_coverage=1 
00:14:41.063 --rc genhtml_legend=1 00:14:41.063 --rc geninfo_all_blocks=1 00:14:41.063 --rc geninfo_unexecuted_blocks=1 00:14:41.063 00:14:41.063 ' 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.063 --rc genhtml_branch_coverage=1 00:14:41.063 --rc genhtml_function_coverage=1 00:14:41.063 --rc genhtml_legend=1 00:14:41.063 --rc geninfo_all_blocks=1 00:14:41.063 --rc geninfo_unexecuted_blocks=1 00:14:41.063 00:14:41.063 ' 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.063 --rc genhtml_branch_coverage=1 00:14:41.063 --rc genhtml_function_coverage=1 00:14:41.063 --rc genhtml_legend=1 00:14:41.063 --rc geninfo_all_blocks=1 00:14:41.063 --rc geninfo_unexecuted_blocks=1 00:14:41.063 00:14:41.063 ' 00:14:41.063 19:16:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:41.063 19:16:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:41.063 19:16:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:41.063 19:16:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.063 19:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:41.063 ************************************ 00:14:41.063 START TEST default_locks 00:14:41.063 ************************************ 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58803 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58803 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58803 ']' 00:14:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.063 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:41.322 [2024-10-17 19:16:50.322372] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:41.322 [2024-10-17 19:16:50.322710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58803 ] 00:14:41.322 [2024-10-17 19:16:50.464751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.322 [2024-10-17 19:16:50.540167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.580 [2024-10-17 19:16:50.623667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.838 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.838 19:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:14:41.838 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58803 00:14:41.838 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58803 00:14:41.838 19:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58803 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58803 ']' 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58803 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58803 00:14:42.096 killing process with pid 58803 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58803' 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58803 00:14:42.096 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58803 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58803 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58803 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58803 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58803 ']' 00:14:42.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
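The locks_exist check traced here leans on the kernel's lock table rather than SPDK internals: when spdk_tgt starts with a core mask and cpumask locks enabled, it takes an exclusive lock on a /var/tmp/spdk_cpu_lock_* file for each claimed core (the expected file names show up later in check_remaining_locks), so the test only has to ask lslocks whether the pid holds such a lock. A minimal equivalent check, with the pid value taken from this run:

    pid=58803
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock"
    fi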
00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:42.663 ERROR: process (pid: 58803) is no longer running 00:14:42.663 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58803) - No such process 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:42.663 00:14:42.663 real 0m1.612s 00:14:42.663 user 0m1.560s 00:14:42.663 sys 0m0.577s 00:14:42.663 ************************************ 00:14:42.663 END TEST default_locks 00:14:42.663 ************************************ 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.663 19:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:42.663 19:16:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:42.663 19:16:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:42.663 19:16:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.663 19:16:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:42.663 ************************************ 00:14:42.663 START TEST default_locks_via_rpc 00:14:42.663 ************************************ 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58847 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58847 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58847 ']' 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.663 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.663 19:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.949 [2024-10-17 19:16:51.977083] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:42.949 [2024-10-17 19:16:51.977262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58847 ] 00:14:42.949 [2024-10-17 19:16:52.113424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.949 [2024-10-17 19:16:52.197022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.215 [2024-10-17 19:16:52.297188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58847 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58847 00:14:43.474 19:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58847 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58847 ']' 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58847 
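default_locks_via_rpc drives the same mechanism through the RPC surface: the target starts with locks enabled, and the test releases and re-acquires the per-core lock files at runtime using the two framework_* methods seen in the trace. Invoked by hand they would look roughly like this (rpc.py path and socket path as used elsewhere in this run):

    # release the per-core lock files at runtime...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ...and claim them again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks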
00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58847 00:14:44.043 killing process with pid 58847 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58847' 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58847 00:14:44.043 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58847 00:14:44.610 ************************************ 00:14:44.610 END TEST default_locks_via_rpc 00:14:44.610 ************************************ 00:14:44.610 00:14:44.610 real 0m1.684s 00:14:44.610 user 0m1.619s 00:14:44.610 sys 0m0.602s 00:14:44.610 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.610 19:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.610 19:16:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:44.610 19:16:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:44.610 19:16:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.610 19:16:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:44.610 ************************************ 00:14:44.610 START TEST non_locking_app_on_locked_coremask 00:14:44.610 ************************************ 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58896 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58896 /var/tmp/spdk.sock 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58896 ']' 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.610 19:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:44.610 [2024-10-17 19:16:53.700270] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:44.610 [2024-10-17 19:16:53.700364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:14:44.611 [2024-10-17 19:16:53.834682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.869 [2024-10-17 19:16:53.909871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.869 [2024-10-17 19:16:53.986120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58905 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58905 /var/tmp/spdk2.sock 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58905 ']' 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:45.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.127 19:16:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:45.127 [2024-10-17 19:16:54.311490] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:45.127 [2024-10-17 19:16:54.311916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:14:45.385 [2024-10-17 19:16:54.461040] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:45.385 [2024-10-17 19:16:54.461114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.385 [2024-10-17 19:16:54.622293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.643 [2024-10-17 19:16:54.823432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.288 19:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.288 19:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:46.288 19:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58896 00:14:46.288 19:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:46.288 19:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58896 00:14:47.221 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58896 00:14:47.221 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58896 ']' 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58896 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58896 00:14:47.222 killing process with pid 58896 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58896' 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58896 00:14:47.222 19:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58896 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58905 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58905 ']' 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58905 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58905 00:14:48.151 killing process with pid 58905 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.151 19:16:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58905' 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58905 00:14:48.151 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58905 00:14:48.714 00:14:48.714 real 0m4.145s 00:14:48.714 user 0m4.381s 00:14:48.714 sys 0m1.223s 00:14:48.714 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.714 19:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:48.714 ************************************ 00:14:48.714 END TEST non_locking_app_on_locked_coremask 00:14:48.714 ************************************ 00:14:48.714 19:16:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:14:48.714 19:16:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:48.714 19:16:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.714 19:16:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:48.714 ************************************ 00:14:48.714 START TEST locking_app_on_unlocked_coremask 00:14:48.714 ************************************ 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58977 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58977 /var/tmp/spdk.sock 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:14:48.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58977 ']' 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.714 19:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:48.714 [2024-10-17 19:16:57.920576] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:48.714 [2024-10-17 19:16:57.920734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:14:48.971 [2024-10-17 19:16:58.061879] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:48.971 [2024-10-17 19:16:58.061940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.971 [2024-10-17 19:16:58.148830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.229 [2024-10-17 19:16:58.252510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58986 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58986 /var/tmp/spdk2.sock 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58986 ']' 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:49.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.486 19:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:49.486 [2024-10-17 19:16:58.583054] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:49.486 [2024-10-17 19:16:58.583195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:14:49.486 [2024-10-17 19:16:58.733866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.744 [2024-10-17 19:16:58.894181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.002 [2024-10-17 19:16:59.088591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.582 19:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.582 19:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:50.582 19:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58986 00:14:50.582 19:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58986 00:14:50.582 19:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:51.156 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58977 00:14:51.156 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58977 ']' 00:14:51.156 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58977 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58977 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.157 killing process with pid 58977 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58977' 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58977 00:14:51.157 19:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58977 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58986 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58986 ']' 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58986 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58986 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.530 killing process with pid 58986 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58986' 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58986 00:14:52.530 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58986 00:14:52.788 00:14:52.788 real 0m4.151s 00:14:52.788 user 0m4.375s 00:14:52.788 sys 0m1.221s 00:14:52.788 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.788 ************************************ 00:14:52.788 END TEST locking_app_on_unlocked_coremask 00:14:52.788 ************************************ 00:14:52.788 19:17:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:52.788 19:17:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:14:52.788 19:17:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:52.788 19:17:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.788 19:17:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:52.788 ************************************ 00:14:52.788 START TEST locking_app_on_locked_coremask 00:14:52.788 ************************************ 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59058 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59058 /var/tmp/spdk.sock 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59058 ']' 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:52.788 19:17:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:53.047 [2024-10-17 19:17:02.103661] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:53.047 [2024-10-17 19:17:02.103768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:14:53.047 [2024-10-17 19:17:02.241123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.306 [2024-10-17 19:17:02.352282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.306 [2024-10-17 19:17:02.452666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59074 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59074 /var/tmp/spdk2.sock 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59074 /var/tmp/spdk2.sock 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59074 /var/tmp/spdk2.sock 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59074 ']' 00:14:54.244 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:54.245 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:54.245 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:54.245 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.245 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:54.245 [2024-10-17 19:17:03.284590] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:54.245 [2024-10-17 19:17:03.284704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59074 ] 00:14:54.245 [2024-10-17 19:17:03.431809] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59058 has claimed it. 00:14:54.245 [2024-10-17 19:17:03.431887] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:54.812 ERROR: process (pid: 59074) is no longer running 00:14:54.812 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59074) - No such process 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59058 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59058 00:14:54.812 19:17:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59058 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59058 ']' 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59058 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.378 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59058 00:14:55.378 killing process with pid 59058 00:14:55.379 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.379 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.379 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59058' 00:14:55.379 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59058 00:14:55.379 19:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59058 00:14:55.946 00:14:55.946 real 0m2.964s 00:14:55.946 user 0m3.414s 00:14:55.946 sys 0m0.790s 00:14:55.946 19:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.946 ************************************ 00:14:55.946 END 
TEST locking_app_on_locked_coremask 00:14:55.946 ************************************ 00:14:55.946 19:17:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:55.946 19:17:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:55.946 19:17:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:55.946 19:17:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.946 19:17:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:55.946 ************************************ 00:14:55.946 START TEST locking_overlapped_coremask 00:14:55.946 ************************************ 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:14:55.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59125 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59125 /var/tmp/spdk.sock 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59125 ']' 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.946 19:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:55.946 [2024-10-17 19:17:05.133457] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:55.946 [2024-10-17 19:17:05.133580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59125 ] 00:14:56.204 [2024-10-17 19:17:05.273525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.204 [2024-10-17 19:17:05.342816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.204 [2024-10-17 19:17:05.342952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.204 [2024-10-17 19:17:05.342956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.204 [2024-10-17 19:17:05.418632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59143 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59143 /var/tmp/spdk2.sock 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59143 /var/tmp/spdk2.sock 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59143 /var/tmp/spdk2.sock 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59143 ']' 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:57.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.140 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:57.140 [2024-10-17 19:17:06.187816] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:14:57.140 [2024-10-17 19:17:06.188146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:14:57.140 [2024-10-17 19:17:06.332625] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59125 has claimed it. 00:14:57.140 [2024-10-17 19:17:06.332703] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:57.715 ERROR: process (pid: 59143) is no longer running 00:14:57.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59143) - No such process 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59125 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59125 ']' 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59125 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:14:57.715 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.716 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59125 00:14:57.974 killing process with pid 59125 00:14:57.974 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:57.974 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:57.974 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59125' 00:14:57.974 19:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59125 00:14:57.974 19:17:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59125 00:14:58.232 00:14:58.232 real 0m2.306s 00:14:58.232 user 0m6.590s 00:14:58.232 sys 0m0.445s 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:58.232 ************************************ 00:14:58.232 END TEST locking_overlapped_coremask 00:14:58.232 ************************************ 00:14:58.232 19:17:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:14:58.232 19:17:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:58.232 19:17:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.232 19:17:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:58.232 ************************************ 00:14:58.232 START TEST locking_overlapped_coremask_via_rpc 00:14:58.232 ************************************ 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59189 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59189 /var/tmp/spdk.sock 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59189 ']' 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.232 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.232 [2024-10-17 19:17:07.474271] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:58.232 [2024-10-17 19:17:07.474389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:14:58.491 [2024-10-17 19:17:07.606442] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
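The locking_overlapped_coremask run above shows the enforcement path: the first spdk_tgt (pid 59125, mask 0x7) holds cores 0-2, the second target (mask 0x1c) aborts with "Cannot create lock on core 2", and check_remaining_locks then confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002 remain. The via_rpc variant starting here launches its targets with --disable-cpumask-locks instead, so the claim is deferred until the framework_enable_cpumask_locks RPC later in the trace. As a rough sketch of how such per-core claims can be made with advisory file locks -- the lock-file paths come from the log, but the use of flock as the mechanism is an assumption, not something the test asserts:

```python
# Illustrative sketch only: the lock-file paths come from the trace above
# (/var/tmp/spdk_cpu_lock_000.._002); taking an exclusive advisory flock on
# each file is an assumption about the mechanism, not taken from the test.
import fcntl
import os

held = {}  # core -> open fd, kept open for as long as the core is "claimed"

def try_claim_core(core):
    """Return True if the per-core lock file could be locked exclusively."""
    path = "/var/tmp/spdk_cpu_lock_%03d" % core
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking claim
        held[core] = fd
        return True
    except BlockingIOError:
        os.close(fd)
        return False  # another process already holds this core

if __name__ == "__main__":
    for core in (0, 1, 2):
        print(core, "claimed" if try_claim_core(core) else "already held")
```

A second process running the same snippet while the first still holds its descriptors would report core 2 as "already held", which is the overlap the trace above provokes deliberately.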
00:14:58.491 [2024-10-17 19:17:07.606528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.491 [2024-10-17 19:17:07.674169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.491 [2024-10-17 19:17:07.674291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.491 [2024-10-17 19:17:07.674295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.750 [2024-10-17 19:17:07.747591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59194 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59194 /var/tmp/spdk2.sock 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59194 ']' 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.750 19:17:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 [2024-10-17 19:17:08.044071] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:14:59.007 [2024-10-17 19:17:08.044272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 00:14:59.007 [2024-10-17 19:17:08.199831] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:59.007 [2024-10-17 19:17:08.199890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.265 [2024-10-17 19:17:08.335628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.265 [2024-10-17 19:17:08.335676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.266 [2024-10-17 19:17:08.335673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:59.266 [2024-10-17 19:17:08.472309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.197 [2024-10-17 19:17:09.199271] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59189 has claimed it. 
00:15:00.197 request: 00:15:00.197 { 00:15:00.197 "method": "framework_enable_cpumask_locks", 00:15:00.197 "req_id": 1 00:15:00.197 } 00:15:00.197 Got JSON-RPC error response 00:15:00.197 response: 00:15:00.197 { 00:15:00.197 "code": -32603, 00:15:00.197 "message": "Failed to claim CPU core: 2" 00:15:00.197 } 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59189 /var/tmp/spdk.sock 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59189 ']' 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.197 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59194 /var/tmp/spdk2.sock 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59194 ']' 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:00.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
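The request/response pair above is the runtime counterpart of the previous test: with cpumask locks disabled at startup, framework_enable_cpumask_locks succeeds on the first target but, sent to the second target through /var/tmp/spdk2.sock, returns -32603 "Failed to claim CPU core: 2". A minimal, hedged sketch of issuing the same call directly -- the socket path, method name, and expected error are taken from the trace; the framing (one plain JSON-RPC 2.0 object written to the Unix socket) is an assumption about the transport:

```python
# Socket path, method name and the expected -32603 error come from the trace;
# writing a single JSON-RPC 2.0 object to the Unix socket is an assumption
# about the transport, matching how the bundled rpc.py client is typically used.
import json
import socket

def rpc_call(sock_path, method, params=None):
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:                      # read until the reply parses as JSON
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue
        return json.loads(buf)

if __name__ == "__main__":
    # Against the second target this is expected to fail with
    # {"code": -32603, "message": "Failed to claim CPU core: 2"}.
    print(rpc_call("/var/tmp/spdk2.sock", "framework_enable_cpumask_locks"))
```

The same call can be made with scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks, which is presumably what the rpc_cmd helper in the trace wraps.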
00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.454 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:01.019 00:15:01.019 real 0m2.582s 00:15:01.019 user 0m1.576s 00:15:01.019 sys 0m0.224s 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.019 19:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.019 ************************************ 00:15:01.019 END TEST locking_overlapped_coremask_via_rpc 00:15:01.019 ************************************ 00:15:01.019 19:17:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:01.019 19:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59189 ]] 00:15:01.019 19:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59189 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59189 ']' 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59189 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59189 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.019 killing process with pid 59189 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59189' 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59189 00:15:01.019 19:17:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59189 00:15:01.297 19:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59194 ]] 00:15:01.297 19:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59194 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59194 ']' 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59194 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.297 
19:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59194 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:01.297 killing process with pid 59194 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59194' 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59194 00:15:01.297 19:17:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59194 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59189 ]] 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59189 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59189 ']' 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59189 00:15:01.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59189) - No such process 00:15:01.870 Process with pid 59189 is not found 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59189 is not found' 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59194 ]] 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59194 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59194 ']' 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59194 00:15:01.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59194) - No such process 00:15:01.870 Process with pid 59194 is not found 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59194 is not found' 00:15:01.870 19:17:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:01.870 00:15:01.870 real 0m20.820s 00:15:01.870 user 0m36.963s 00:15:01.870 sys 0m5.971s 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.870 19:17:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:01.870 ************************************ 00:15:01.870 END TEST cpu_locks 00:15:01.870 ************************************ 00:15:01.870 ************************************ 00:15:01.870 END TEST event 00:15:01.870 ************************************ 00:15:01.870 00:15:01.870 real 0m50.492s 00:15:01.870 user 1m40.306s 00:15:01.870 sys 0m9.846s 00:15:01.870 19:17:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.870 19:17:10 event -- common/autotest_common.sh@10 -- # set +x 00:15:01.870 19:17:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:01.870 19:17:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:01.870 19:17:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.870 19:17:10 -- common/autotest_common.sh@10 -- # set +x 00:15:01.870 ************************************ 00:15:01.870 START TEST thread 00:15:01.870 ************************************ 00:15:01.870 19:17:10 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:01.870 * Looking for test storage... 
00:15:01.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:01.870 19:17:11 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:01.870 19:17:11 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:15:01.870 19:17:11 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:01.870 19:17:11 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:01.870 19:17:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.870 19:17:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.870 19:17:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.870 19:17:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.870 19:17:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.870 19:17:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.870 19:17:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.870 19:17:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.128 19:17:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.128 19:17:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.128 19:17:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.128 19:17:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:15:02.128 19:17:11 thread -- scripts/common.sh@345 -- # : 1 00:15:02.128 19:17:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.128 19:17:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.128 19:17:11 thread -- scripts/common.sh@365 -- # decimal 1 00:15:02.128 19:17:11 thread -- scripts/common.sh@353 -- # local d=1 00:15:02.128 19:17:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.128 19:17:11 thread -- scripts/common.sh@355 -- # echo 1 00:15:02.128 19:17:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.128 19:17:11 thread -- scripts/common.sh@366 -- # decimal 2 00:15:02.128 19:17:11 thread -- scripts/common.sh@353 -- # local d=2 00:15:02.128 19:17:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.128 19:17:11 thread -- scripts/common.sh@355 -- # echo 2 00:15:02.128 19:17:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.128 19:17:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.128 19:17:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.128 19:17:11 thread -- scripts/common.sh@368 -- # return 0 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:02.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.128 --rc genhtml_branch_coverage=1 00:15:02.128 --rc genhtml_function_coverage=1 00:15:02.128 --rc genhtml_legend=1 00:15:02.128 --rc geninfo_all_blocks=1 00:15:02.128 --rc geninfo_unexecuted_blocks=1 00:15:02.128 00:15:02.128 ' 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:02.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.128 --rc genhtml_branch_coverage=1 00:15:02.128 --rc genhtml_function_coverage=1 00:15:02.128 --rc genhtml_legend=1 00:15:02.128 --rc geninfo_all_blocks=1 00:15:02.128 --rc geninfo_unexecuted_blocks=1 00:15:02.128 00:15:02.128 ' 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:02.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:02.128 --rc genhtml_branch_coverage=1 00:15:02.128 --rc genhtml_function_coverage=1 00:15:02.128 --rc genhtml_legend=1 00:15:02.128 --rc geninfo_all_blocks=1 00:15:02.128 --rc geninfo_unexecuted_blocks=1 00:15:02.128 00:15:02.128 ' 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:02.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.128 --rc genhtml_branch_coverage=1 00:15:02.128 --rc genhtml_function_coverage=1 00:15:02.128 --rc genhtml_legend=1 00:15:02.128 --rc geninfo_all_blocks=1 00:15:02.128 --rc geninfo_unexecuted_blocks=1 00:15:02.128 00:15:02.128 ' 00:15:02.128 19:17:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.128 19:17:11 thread -- common/autotest_common.sh@10 -- # set +x 00:15:02.128 ************************************ 00:15:02.128 START TEST thread_poller_perf 00:15:02.128 ************************************ 00:15:02.128 19:17:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:02.128 [2024-10-17 19:17:11.161784] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:02.128 [2024-10-17 19:17:11.162860] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:15:02.128 [2024-10-17 19:17:11.296969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.128 [2024-10-17 19:17:11.357864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.128 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:15:03.501 [2024-10-17T19:17:12.759Z] ====================================== 00:15:03.501 [2024-10-17T19:17:12.759Z] busy:2208604558 (cyc) 00:15:03.501 [2024-10-17T19:17:12.759Z] total_run_count: 312000 00:15:03.501 [2024-10-17T19:17:12.759Z] tsc_hz: 2200000000 (cyc) 00:15:03.501 [2024-10-17T19:17:12.759Z] ====================================== 00:15:03.501 [2024-10-17T19:17:12.759Z] poller_cost: 7078 (cyc), 3217 (nsec) 00:15:03.501 00:15:03.501 ************************************ 00:15:03.501 END TEST thread_poller_perf 00:15:03.502 ************************************ 00:15:03.502 real 0m1.269s 00:15:03.502 user 0m1.117s 00:15:03.502 sys 0m0.043s 00:15:03.502 19:17:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:03.502 19:17:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.502 19:17:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:03.502 19:17:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:15:03.502 19:17:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.502 19:17:12 thread -- common/autotest_common.sh@10 -- # set +x 00:15:03.502 ************************************ 00:15:03.502 START TEST thread_poller_perf 00:15:03.502 ************************************ 00:15:03.502 19:17:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:03.502 [2024-10-17 19:17:12.485172] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:03.502 [2024-10-17 19:17:12.485270] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:15:03.502 [2024-10-17 19:17:12.624195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.502 Running 1000 pollers for 1 seconds with 0 microseconds period. 
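poller_cost in the summary above is derived directly from the printed counters: the busy cycle count divided by total_run_count gives cycles per poller invocation, and dividing by the TSC frequency converts that to nanoseconds. Reproducing the first run's figures (1 microsecond period) before the second run's results:

```python
# Recomputes the poller_cost figures printed by the first poller_perf run above.
busy_cyc = 2_208_604_558       # "busy" cycle counter from the summary
total_run_count = 312_000      # poller invocations during the 1 s window
tsc_hz = 2_200_000_000         # TSC frequency reported by the tool

cost_cyc = busy_cyc // total_run_count           # -> 7078 cycles per call
cost_nsec = cost_cyc * 1_000_000_000 // tsc_hz   # -> 3217 ns per call
print(cost_cyc, cost_nsec)
```

Applied to the 0 microsecond run that follows (2202750649 busy cycles over 3896000 runs), the same arithmetic gives 565 cycles and 256 ns, matching the second summary.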
00:15:03.502 [2024-10-17 19:17:12.709085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.876 [2024-10-17T19:17:14.134Z] ====================================== 00:15:04.876 [2024-10-17T19:17:14.134Z] busy:2202750649 (cyc) 00:15:04.876 [2024-10-17T19:17:14.134Z] total_run_count: 3896000 00:15:04.876 [2024-10-17T19:17:14.134Z] tsc_hz: 2200000000 (cyc) 00:15:04.876 [2024-10-17T19:17:14.134Z] ====================================== 00:15:04.876 [2024-10-17T19:17:14.134Z] poller_cost: 565 (cyc), 256 (nsec) 00:15:04.876 ************************************ 00:15:04.876 END TEST thread_poller_perf 00:15:04.876 ************************************ 00:15:04.876 00:15:04.876 real 0m1.309s 00:15:04.876 user 0m1.151s 00:15:04.876 sys 0m0.050s 00:15:04.876 19:17:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.876 19:17:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 19:17:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:04.876 ************************************ 00:15:04.876 END TEST thread 00:15:04.876 ************************************ 00:15:04.876 00:15:04.876 real 0m2.843s 00:15:04.876 user 0m2.388s 00:15:04.876 sys 0m0.243s 00:15:04.876 19:17:13 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.876 19:17:13 thread -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 19:17:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:15:04.876 19:17:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:04.876 19:17:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:04.876 19:17:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.876 19:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:04.876 ************************************ 00:15:04.876 START TEST app_cmdline 00:15:04.876 ************************************ 00:15:04.876 19:17:13 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:04.876 * Looking for test storage... 
00:15:04.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:04.876 19:17:13 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:04.876 19:17:13 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:15:04.876 19:17:13 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:04.876 19:17:14 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:15:04.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.876 19:17:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:15:04.876 19:17:14 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.876 19:17:14 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.876 --rc genhtml_branch_coverage=1 00:15:04.876 --rc genhtml_function_coverage=1 00:15:04.876 --rc genhtml_legend=1 00:15:04.876 --rc geninfo_all_blocks=1 00:15:04.876 --rc geninfo_unexecuted_blocks=1 00:15:04.876 00:15:04.876 ' 00:15:04.876 19:17:14 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.876 --rc genhtml_branch_coverage=1 00:15:04.876 --rc genhtml_function_coverage=1 00:15:04.876 --rc genhtml_legend=1 00:15:04.876 --rc geninfo_all_blocks=1 00:15:04.877 --rc geninfo_unexecuted_blocks=1 00:15:04.877 00:15:04.877 ' 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:04.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.877 --rc genhtml_branch_coverage=1 00:15:04.877 --rc genhtml_function_coverage=1 00:15:04.877 --rc genhtml_legend=1 00:15:04.877 --rc geninfo_all_blocks=1 00:15:04.877 --rc geninfo_unexecuted_blocks=1 00:15:04.877 00:15:04.877 ' 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:04.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.877 --rc genhtml_branch_coverage=1 00:15:04.877 --rc genhtml_function_coverage=1 00:15:04.877 --rc genhtml_legend=1 00:15:04.877 --rc geninfo_all_blocks=1 00:15:04.877 --rc geninfo_unexecuted_blocks=1 00:15:04.877 00:15:04.877 ' 00:15:04.877 19:17:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:04.877 19:17:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59451 00:15:04.877 19:17:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:04.877 19:17:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59451 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59451 ']' 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.877 19:17:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:05.136 [2024-10-17 19:17:14.131549] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:05.136 [2024-10-17 19:17:14.132061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59451 ] 00:15:05.136 [2024-10-17 19:17:14.278338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.136 [2024-10-17 19:17:14.356801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.394 [2024-10-17 19:17:14.451780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.960 19:17:15 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.960 19:17:15 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:15:05.960 19:17:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:15:06.219 { 00:15:06.219 "version": "SPDK v25.01-pre git sha1 006f950ff", 00:15:06.219 "fields": { 00:15:06.219 "major": 25, 00:15:06.219 "minor": 1, 00:15:06.219 "patch": 0, 00:15:06.219 "suffix": "-pre", 00:15:06.219 "commit": "006f950ff" 00:15:06.219 } 00:15:06.219 } 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:06.219 19:17:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:15:06.219 19:17:15 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.219 19:17:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:06.219 19:17:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.590 19:17:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:06.590 19:17:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:06.590 19:17:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:06.590 request: 00:15:06.590 { 00:15:06.590 "method": "env_dpdk_get_mem_stats", 00:15:06.590 "req_id": 1 00:15:06.590 } 00:15:06.590 Got JSON-RPC error response 00:15:06.590 response: 00:15:06.590 { 00:15:06.590 "code": -32601, 00:15:06.590 "message": "Method not found" 00:15:06.590 } 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.590 19:17:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59451 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59451 ']' 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59451 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59451 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.590 killing process with pid 59451 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59451' 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 59451 00:15:06.590 19:17:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 59451 00:15:07.156 00:15:07.156 real 0m2.428s 00:15:07.156 user 0m2.948s 00:15:07.156 sys 0m0.577s 00:15:07.156 19:17:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.156 19:17:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:07.156 ************************************ 00:15:07.156 END TEST app_cmdline 00:15:07.156 ************************************ 00:15:07.156 19:17:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:07.156 19:17:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.156 19:17:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.156 19:17:16 -- common/autotest_common.sh@10 -- # set +x 00:15:07.156 ************************************ 00:15:07.156 START TEST version 00:15:07.156 ************************************ 00:15:07.156 19:17:16 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:07.156 * Looking for test storage... 
00:15:07.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1691 -- # lcov --version 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:07.414 19:17:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.414 19:17:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.414 19:17:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.414 19:17:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.414 19:17:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.414 19:17:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.414 19:17:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.414 19:17:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.414 19:17:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.414 19:17:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.414 19:17:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.414 19:17:16 version -- scripts/common.sh@344 -- # case "$op" in 00:15:07.414 19:17:16 version -- scripts/common.sh@345 -- # : 1 00:15:07.414 19:17:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.414 19:17:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:07.414 19:17:16 version -- scripts/common.sh@365 -- # decimal 1 00:15:07.414 19:17:16 version -- scripts/common.sh@353 -- # local d=1 00:15:07.414 19:17:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.414 19:17:16 version -- scripts/common.sh@355 -- # echo 1 00:15:07.414 19:17:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.414 19:17:16 version -- scripts/common.sh@366 -- # decimal 2 00:15:07.414 19:17:16 version -- scripts/common.sh@353 -- # local d=2 00:15:07.414 19:17:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.414 19:17:16 version -- scripts/common.sh@355 -- # echo 2 00:15:07.414 19:17:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.414 19:17:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.414 19:17:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.414 19:17:16 version -- scripts/common.sh@368 -- # return 0 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.414 19:17:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.414 --rc genhtml_branch_coverage=1 00:15:07.414 --rc genhtml_function_coverage=1 00:15:07.414 --rc genhtml_legend=1 00:15:07.414 --rc geninfo_all_blocks=1 00:15:07.414 --rc geninfo_unexecuted_blocks=1 00:15:07.415 00:15:07.415 ' 00:15:07.415 19:17:16 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.415 --rc genhtml_branch_coverage=1 00:15:07.415 --rc genhtml_function_coverage=1 00:15:07.415 --rc genhtml_legend=1 00:15:07.415 --rc geninfo_all_blocks=1 00:15:07.415 --rc geninfo_unexecuted_blocks=1 00:15:07.415 00:15:07.415 ' 00:15:07.415 19:17:16 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:07.415 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:07.415 --rc genhtml_branch_coverage=1 00:15:07.415 --rc genhtml_function_coverage=1 00:15:07.415 --rc genhtml_legend=1 00:15:07.415 --rc geninfo_all_blocks=1 00:15:07.415 --rc geninfo_unexecuted_blocks=1 00:15:07.415 00:15:07.415 ' 00:15:07.415 19:17:16 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.415 --rc genhtml_branch_coverage=1 00:15:07.415 --rc genhtml_function_coverage=1 00:15:07.415 --rc genhtml_legend=1 00:15:07.415 --rc geninfo_all_blocks=1 00:15:07.415 --rc geninfo_unexecuted_blocks=1 00:15:07.415 00:15:07.415 ' 00:15:07.415 19:17:16 version -- app/version.sh@17 -- # get_header_version major 00:15:07.415 19:17:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # cut -f2 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:07.415 19:17:16 version -- app/version.sh@17 -- # major=25 00:15:07.415 19:17:16 version -- app/version.sh@18 -- # get_header_version minor 00:15:07.415 19:17:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # cut -f2 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:07.415 19:17:16 version -- app/version.sh@18 -- # minor=1 00:15:07.415 19:17:16 version -- app/version.sh@19 -- # get_header_version patch 00:15:07.415 19:17:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # cut -f2 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:07.415 19:17:16 version -- app/version.sh@19 -- # patch=0 00:15:07.415 19:17:16 version -- app/version.sh@20 -- # get_header_version suffix 00:15:07.415 19:17:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # cut -f2 00:15:07.415 19:17:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:07.415 19:17:16 version -- app/version.sh@20 -- # suffix=-pre 00:15:07.415 19:17:16 version -- app/version.sh@22 -- # version=25.1 00:15:07.415 19:17:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:15:07.415 19:17:16 version -- app/version.sh@28 -- # version=25.1rc0 00:15:07.415 19:17:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:07.415 19:17:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:07.415 19:17:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:15:07.415 19:17:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:15:07.415 00:15:07.415 real 0m0.244s 00:15:07.415 user 0m0.164s 00:15:07.415 sys 0m0.117s 00:15:07.415 19:17:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.415 ************************************ 00:15:07.415 19:17:16 version -- common/autotest_common.sh@10 -- # set +x 00:15:07.415 END TEST version 00:15:07.415 ************************************ 00:15:07.415 19:17:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:15:07.415 19:17:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:15:07.415 19:17:16 -- spdk/autotest.sh@194 -- # uname -s 00:15:07.415 19:17:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:15:07.415 19:17:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:07.415 19:17:16 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:15:07.415 19:17:16 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:15:07.415 19:17:16 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:15:07.415 19:17:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.415 19:17:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.415 19:17:16 -- common/autotest_common.sh@10 -- # set +x 00:15:07.415 ************************************ 00:15:07.415 START TEST spdk_dd 00:15:07.415 ************************************ 00:15:07.415 19:17:16 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:15:07.673 * Looking for test storage... 00:15:07.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@345 -- # : 1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.673 19:17:16 spdk_dd -- scripts/common.sh@368 -- # return 0 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.673 --rc genhtml_branch_coverage=1 00:15:07.673 --rc genhtml_function_coverage=1 00:15:07.673 --rc genhtml_legend=1 00:15:07.673 --rc geninfo_all_blocks=1 00:15:07.673 --rc geninfo_unexecuted_blocks=1 00:15:07.673 00:15:07.673 ' 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.673 --rc genhtml_branch_coverage=1 00:15:07.673 --rc genhtml_function_coverage=1 00:15:07.673 --rc genhtml_legend=1 00:15:07.673 --rc geninfo_all_blocks=1 00:15:07.673 --rc geninfo_unexecuted_blocks=1 00:15:07.673 00:15:07.673 ' 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.673 --rc genhtml_branch_coverage=1 00:15:07.673 --rc genhtml_function_coverage=1 00:15:07.673 --rc genhtml_legend=1 00:15:07.673 --rc geninfo_all_blocks=1 00:15:07.673 --rc geninfo_unexecuted_blocks=1 00:15:07.673 00:15:07.673 ' 00:15:07.673 19:17:16 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.674 --rc genhtml_branch_coverage=1 00:15:07.674 --rc genhtml_function_coverage=1 00:15:07.674 --rc genhtml_legend=1 00:15:07.674 --rc geninfo_all_blocks=1 00:15:07.674 --rc geninfo_unexecuted_blocks=1 00:15:07.674 00:15:07.674 ' 00:15:07.674 19:17:16 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.674 19:17:16 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.674 19:17:16 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.674 19:17:16 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.674 19:17:16 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.674 19:17:16 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.674 19:17:16 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.674 19:17:16 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.674 19:17:16 spdk_dd -- paths/export.sh@5 -- # export PATH 00:15:07.674 19:17:16 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.674 19:17:16 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:07.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.932 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:07.932 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.191 19:17:17 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:15:08.191 19:17:17 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@233 -- # local class 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@235 -- # local progif 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@236 -- # class=01 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:15:08.191 19:17:17 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@18 -- # local i 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@27 -- # return 0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:15:08.191 19:17:17 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:08.191 19:17:17 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@139 -- # local lib 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
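The trace above resolves which NVMe controllers the dd tests may use by walking PCI class codes (class 01, subclass 08, prog-if 02) and filtering each BDF through pci_can_use. A minimal standalone sketch of that enumeration, assuming a Linux host with lspci available; variable names here are illustrative, not lifted verbatim from scripts/common.sh:

  #!/usr/bin/env bash
  # Build the "0108" class/subclass filter and the "02" prog-if filter the same
  # way the traced helper does.
  class=$(printf '%02x' 1)      # 01 = mass storage controller
  subclass=$(printf '%02x' 8)   # 08 = non-volatile memory controller
  progif=$(printf '%02x' 2)     # 02 = NVM Express
  # List every PCI function, keep the NVMe programming-interface entries, then
  # keep those whose quoted class field is "0108" and print their BDF addresses.
  # The embedded quotes in cc matter because lspci -mm quotes its fields.
  lspci -mm -n -D \
    | grep -i -- "-p${progif}" \
    | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'

In this run that pipeline yields 0000:00:10.0 and 0000:00:11.0, and both pass pci_can_use (no allow/block lists are set here), so they become the two controllers handed to basic_rw.sh.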
00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:15:08.191 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
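dd/common.sh decides here whether the spdk_dd binary was built against liburing by scanning the NEEDED entries in its dynamic section; every shared-library name is tested against liburing.so.*, which is exactly what the long run of [[ ... == liburing.so.* ]] checks in this part of the trace shows. A condensed sketch of that loop, using the binary path from this run:

  #!/usr/bin/env bash
  # Walk the "NEEDED <library>" lines emitted by objdump and flag liburing if
  # any dependency name matches liburing.so.*.
  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  liburing_in_use=0
  while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p "$dd_bin" | grep NEEDED)
  (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'

Further down the trace the match on liburing.so.2 triggers exactly that printf, after which test/common/build_config.sh is sourced and, since CONFIG_URING=y there, liburing_in_use is exported as 1 before dd.sh moves on to the basic_rw tests.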
00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:15:08.192 * spdk_dd linked to liburing 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:08.192 19:17:17 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:15:08.192 19:17:17 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:15:08.193 19:17:17 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:15:08.193 19:17:17 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:15:08.193 19:17:17 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:15:08.193 19:17:17 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:15:08.193 19:17:17 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:15:08.193 19:17:17 spdk_dd -- dd/common.sh@153 -- # return 0 00:15:08.193 19:17:17 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:15:08.193 19:17:17 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:15:08.193 19:17:17 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:08.193 19:17:17 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.193 19:17:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:08.193 ************************************ 00:15:08.193 START TEST spdk_dd_basic_rw 00:15:08.193 ************************************ 00:15:08.193 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:15:08.193 * Looking for test storage... 00:15:08.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:08.193 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:08.193 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:15:08.193 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.454 --rc genhtml_branch_coverage=1 00:15:08.454 --rc genhtml_function_coverage=1 00:15:08.454 --rc genhtml_legend=1 00:15:08.454 --rc geninfo_all_blocks=1 00:15:08.454 --rc geninfo_unexecuted_blocks=1 00:15:08.454 00:15:08.454 ' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.454 --rc genhtml_branch_coverage=1 00:15:08.454 --rc genhtml_function_coverage=1 00:15:08.454 --rc genhtml_legend=1 00:15:08.454 --rc geninfo_all_blocks=1 00:15:08.454 --rc geninfo_unexecuted_blocks=1 00:15:08.454 00:15:08.454 ' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.454 --rc genhtml_branch_coverage=1 00:15:08.454 --rc genhtml_function_coverage=1 00:15:08.454 --rc genhtml_legend=1 00:15:08.454 --rc geninfo_all_blocks=1 00:15:08.454 --rc geninfo_unexecuted_blocks=1 00:15:08.454 00:15:08.454 ' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.454 --rc genhtml_branch_coverage=1 00:15:08.454 --rc genhtml_function_coverage=1 00:15:08.454 --rc genhtml_legend=1 00:15:08.454 --rc geninfo_all_blocks=1 00:15:08.454 --rc geninfo_unexecuted_blocks=1 00:15:08.454 00:15:08.454 ' 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.454 19:17:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:08.455 19:17:17 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 3 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:15:08.455 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:08.456 ************************************ 00:15:08.456 START TEST dd_bs_lt_native_bs 00:15:08.456 ************************************ 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:08.456 19:17:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:15:08.715 { 00:15:08.715 "subsystems": [ 00:15:08.715 { 00:15:08.715 "subsystem": "bdev", 00:15:08.715 "config": [ 00:15:08.715 { 00:15:08.715 "params": { 00:15:08.715 "trtype": "pcie", 00:15:08.715 "traddr": "0000:00:10.0", 00:15:08.715 "name": "Nvme0" 00:15:08.715 }, 00:15:08.715 "method": "bdev_nvme_attach_controller" 00:15:08.715 }, 00:15:08.715 { 00:15:08.715 "method": "bdev_wait_for_examine" 00:15:08.715 } 00:15:08.715 ] 00:15:08.715 } 00:15:08.715 ] 00:15:08.715 } 00:15:08.715 [2024-10-17 19:17:17.732062] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:08.715 [2024-10-17 19:17:17.732185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:15:08.715 [2024-10-17 19:17:17.870152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.715 [2024-10-17 19:17:17.936466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.974 [2024-10-17 19:17:17.990116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.974 [2024-10-17 19:17:18.102879] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:15:08.974 [2024-10-17 19:17:18.102967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.233 [2024-10-17 19:17:18.231580] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.233 00:15:09.233 real 0m0.626s 00:15:09.233 user 0m0.410s 00:15:09.233 sys 0m0.166s 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.233 19:17:18 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:15:09.233 ************************************ 00:15:09.233 END TEST dd_bs_lt_native_bs 00:15:09.233 ************************************ 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:09.233 ************************************ 00:15:09.233 START TEST dd_rw 00:15:09.233 ************************************ 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:09.233 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:09.802 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:15:09.802 19:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:09.802 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:09.802 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:09.802 [2024-10-17 19:17:19.050519] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
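[editor's note] For context on the dd_bs_lt_native_bs result above: the trace first matches the controller identify dump against "LBA Format #04: Data Size: *([0-9]+)" to learn the native block size (4096 here), then runs spdk_dd with --bs=2048 and expects the "--bs value cannot be less than ... native block size" error. A minimal stand-alone sketch of that check follows; spdk_dd stands for build/bin/spdk_dd, BDEV_JSON for the gen_conf output the harness feeds over /dev/fd/62, and identify_out is assumed to hold the controller dump printed earlier, so this is an illustration rather than the harness code itself.

    # Extract the native block size the same way the trace does, then confirm
    # that spdk_dd refuses a smaller --bs (placeholders: spdk_dd, BDEV_JSON,
    # identify_out).
    re='LBA Format #04: Data Size: *([0-9]+)'
    [[ $identify_out =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096 for this namespace
    if ! spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=$((native_bs / 2)) --count=1 --json "$BDEV_JSON"; then
        echo "bs=$((native_bs / 2)) rejected: smaller than the ${native_bs}-byte native block size"
    fi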
00:15:09.802 [2024-10-17 19:17:19.050635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:15:10.062 { 00:15:10.062 "subsystems": [ 00:15:10.062 { 00:15:10.062 "subsystem": "bdev", 00:15:10.062 "config": [ 00:15:10.062 { 00:15:10.062 "params": { 00:15:10.062 "trtype": "pcie", 00:15:10.062 "traddr": "0000:00:10.0", 00:15:10.062 "name": "Nvme0" 00:15:10.062 }, 00:15:10.062 "method": "bdev_nvme_attach_controller" 00:15:10.062 }, 00:15:10.062 { 00:15:10.062 "method": "bdev_wait_for_examine" 00:15:10.062 } 00:15:10.062 ] 00:15:10.062 } 00:15:10.062 ] 00:15:10.062 } 00:15:10.062 [2024-10-17 19:17:19.185906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.062 [2024-10-17 19:17:19.255652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.062 [2024-10-17 19:17:19.311767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.321  [2024-10-17T19:17:19.838Z] Copying: 60/60 [kB] (average 29 MBps) 00:15:10.580 00:15:10.581 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:15:10.581 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:10.581 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:10.581 19:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:10.581 { 00:15:10.581 "subsystems": [ 00:15:10.581 { 00:15:10.581 "subsystem": "bdev", 00:15:10.581 "config": [ 00:15:10.581 { 00:15:10.581 "params": { 00:15:10.581 "trtype": "pcie", 00:15:10.581 "traddr": "0000:00:10.0", 00:15:10.581 "name": "Nvme0" 00:15:10.581 }, 00:15:10.581 "method": "bdev_nvme_attach_controller" 00:15:10.581 }, 00:15:10.581 { 00:15:10.581 "method": "bdev_wait_for_examine" 00:15:10.581 } 00:15:10.581 ] 00:15:10.581 } 00:15:10.581 ] 00:15:10.581 } 00:15:10.581 [2024-10-17 19:17:19.684084] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:10.581 [2024-10-17 19:17:19.684242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:15:10.581 [2024-10-17 19:17:19.827380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.839 [2024-10-17 19:17:19.891839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.839 [2024-10-17 19:17:19.947502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.839  [2024-10-17T19:17:20.355Z] Copying: 60/60 [kB] (average 19 MBps) 00:15:11.097 00:15:11.097 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:11.098 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:11.098 { 00:15:11.098 "subsystems": [ 00:15:11.098 { 00:15:11.098 "subsystem": "bdev", 00:15:11.098 "config": [ 00:15:11.098 { 00:15:11.098 "params": { 00:15:11.098 "trtype": "pcie", 00:15:11.098 "traddr": "0000:00:10.0", 00:15:11.098 "name": "Nvme0" 00:15:11.098 }, 00:15:11.098 "method": "bdev_nvme_attach_controller" 00:15:11.098 }, 00:15:11.098 { 00:15:11.098 "method": "bdev_wait_for_examine" 00:15:11.098 } 00:15:11.098 ] 00:15:11.098 } 00:15:11.098 ] 00:15:11.098 } 00:15:11.098 [2024-10-17 19:17:20.324462] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:11.098 [2024-10-17 19:17:20.324572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:15:11.356 [2024-10-17 19:17:20.464484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.356 [2024-10-17 19:17:20.530990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.356 [2024-10-17 19:17:20.586654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.615  [2024-10-17T19:17:21.132Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:15:11.874 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:11.874 19:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 19:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:15:12.441 19:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:12.441 19:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:12.441 19:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:12.441 { 00:15:12.441 "subsystems": [ 00:15:12.441 { 00:15:12.441 "subsystem": "bdev", 00:15:12.441 "config": [ 00:15:12.441 { 00:15:12.441 "params": { 00:15:12.441 "trtype": "pcie", 00:15:12.441 "traddr": "0000:00:10.0", 00:15:12.441 "name": "Nvme0" 00:15:12.441 }, 00:15:12.441 "method": "bdev_nvme_attach_controller" 00:15:12.441 }, 00:15:12.441 { 00:15:12.441 "method": "bdev_wait_for_examine" 00:15:12.441 } 00:15:12.441 ] 00:15:12.441 } 00:15:12.441 ] 00:15:12.441 } 00:15:12.441 [2024-10-17 19:17:21.542980] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
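[editor's note] The dd_rw rounds above and below all repeat one cycle: write dd.dump0 to the Nvme0n1 bdev at a given block size and queue depth, read the same number of blocks back into dd.dump1, diff the two files, then zero the first MiB of the bdev (clear_nvme) before the next round. Block sizes are the native block size shifted left by 0..2 (4096, 8192, 16384) and queue depths are 1 and 64. The sketch below is an outline under those assumptions, not the harness code; spdk_dd and BDEV_JSON are placeholders, dd.dump0 is assumed to be pre-filled with gen_bytes output, and the count formula simply reproduces the 15/7/3 values seen in this run.

    native_bs=4096                                   # detected from LBA Format #04 above
    qds=(1 64)
    bss=()
    for i in {0..2}; do bss+=($((native_bs << i))); done          # 4096 8192 16384
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        count=$((61440 / bs))                        # 15, 7, 3 in this run; the harness may derive it differently
        spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$BDEV_JSON"
        spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$BDEV_JSON"
        diff -q dd.dump0 dd.dump1                    # verify the round trip
        spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$BDEV_JSON"   # clear_nvme step
      done
    done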
00:15:12.442 [2024-10-17 19:17:21.543099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:15:12.442 [2024-10-17 19:17:21.679439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.771 [2024-10-17 19:17:21.744508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.771 [2024-10-17 19:17:21.797822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.771  [2024-10-17T19:17:22.287Z] Copying: 60/60 [kB] (average 58 MBps) 00:15:13.029 00:15:13.029 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:13.029 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:15:13.029 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:13.029 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:13.029 { 00:15:13.029 "subsystems": [ 00:15:13.029 { 00:15:13.029 "subsystem": "bdev", 00:15:13.029 "config": [ 00:15:13.029 { 00:15:13.029 "params": { 00:15:13.029 "trtype": "pcie", 00:15:13.029 "traddr": "0000:00:10.0", 00:15:13.029 "name": "Nvme0" 00:15:13.029 }, 00:15:13.029 "method": "bdev_nvme_attach_controller" 00:15:13.029 }, 00:15:13.029 { 00:15:13.029 "method": "bdev_wait_for_examine" 00:15:13.029 } 00:15:13.029 ] 00:15:13.029 } 00:15:13.029 ] 00:15:13.029 } 00:15:13.029 [2024-10-17 19:17:22.176386] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:13.029 [2024-10-17 19:17:22.176519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59908 ] 00:15:13.288 [2024-10-17 19:17:22.323382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.288 [2024-10-17 19:17:22.396672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.288 [2024-10-17 19:17:22.453808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.546  [2024-10-17T19:17:22.804Z] Copying: 60/60 [kB] (average 58 MBps) 00:15:13.546 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:13.546 19:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:13.804 { 00:15:13.804 "subsystems": [ 00:15:13.804 { 00:15:13.804 "subsystem": "bdev", 00:15:13.804 "config": [ 00:15:13.804 { 00:15:13.804 "params": { 00:15:13.804 "trtype": "pcie", 00:15:13.804 "traddr": "0000:00:10.0", 00:15:13.804 "name": "Nvme0" 00:15:13.804 }, 00:15:13.804 "method": "bdev_nvme_attach_controller" 00:15:13.804 }, 00:15:13.804 { 00:15:13.804 "method": "bdev_wait_for_examine" 00:15:13.804 } 00:15:13.804 ] 00:15:13.804 } 00:15:13.804 ] 00:15:13.804 } 00:15:13.804 [2024-10-17 19:17:22.832048] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:13.804 [2024-10-17 19:17:22.832223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:15:13.804 [2024-10-17 19:17:22.970740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.804 [2024-10-17 19:17:23.037625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.063 [2024-10-17 19:17:23.091055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.063  [2024-10-17T19:17:23.581Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:14.323 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:14.323 19:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:14.890 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:15:14.890 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:14.890 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:14.890 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:14.890 [2024-10-17 19:17:24.063251] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:14.890 [2024-10-17 19:17:24.063851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59948 ] 00:15:14.890 { 00:15:14.890 "subsystems": [ 00:15:14.890 { 00:15:14.890 "subsystem": "bdev", 00:15:14.890 "config": [ 00:15:14.890 { 00:15:14.890 "params": { 00:15:14.890 "trtype": "pcie", 00:15:14.890 "traddr": "0000:00:10.0", 00:15:14.890 "name": "Nvme0" 00:15:14.890 }, 00:15:14.890 "method": "bdev_nvme_attach_controller" 00:15:14.890 }, 00:15:14.890 { 00:15:14.890 "method": "bdev_wait_for_examine" 00:15:14.890 } 00:15:14.890 ] 00:15:14.890 } 00:15:14.890 ] 00:15:14.890 } 00:15:15.149 [2024-10-17 19:17:24.205099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.149 [2024-10-17 19:17:24.269896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.149 [2024-10-17 19:17:24.323821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.407  [2024-10-17T19:17:24.665Z] Copying: 56/56 [kB] (average 54 MBps) 00:15:15.407 00:15:15.407 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:15.407 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:15:15.407 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:15.407 19:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 { 00:15:15.665 "subsystems": [ 00:15:15.665 { 00:15:15.665 "subsystem": "bdev", 00:15:15.665 "config": [ 00:15:15.665 { 00:15:15.665 "params": { 00:15:15.665 "trtype": "pcie", 00:15:15.665 "traddr": "0000:00:10.0", 00:15:15.665 "name": "Nvme0" 00:15:15.665 }, 00:15:15.665 "method": "bdev_nvme_attach_controller" 00:15:15.666 }, 00:15:15.666 { 00:15:15.666 "method": "bdev_wait_for_examine" 00:15:15.666 } 00:15:15.666 ] 00:15:15.666 } 00:15:15.666 ] 00:15:15.666 } 00:15:15.666 [2024-10-17 19:17:24.691038] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:15.666 [2024-10-17 19:17:24.691164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:15:15.666 [2024-10-17 19:17:24.828962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.666 [2024-10-17 19:17:24.896762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.924 [2024-10-17 19:17:24.950501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.924  [2024-10-17T19:17:25.468Z] Copying: 56/56 [kB] (average 27 MBps) 00:15:16.210 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:16.210 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:16.210 [2024-10-17 19:17:25.325760] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:16.210 [2024-10-17 19:17:25.325869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:15:16.210 { 00:15:16.210 "subsystems": [ 00:15:16.210 { 00:15:16.210 "subsystem": "bdev", 00:15:16.210 "config": [ 00:15:16.210 { 00:15:16.210 "params": { 00:15:16.210 "trtype": "pcie", 00:15:16.210 "traddr": "0000:00:10.0", 00:15:16.210 "name": "Nvme0" 00:15:16.210 }, 00:15:16.210 "method": "bdev_nvme_attach_controller" 00:15:16.210 }, 00:15:16.210 { 00:15:16.210 "method": "bdev_wait_for_examine" 00:15:16.210 } 00:15:16.210 ] 00:15:16.210 } 00:15:16.210 ] 00:15:16.210 } 00:15:16.210 [2024-10-17 19:17:25.462800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.469 [2024-10-17 19:17:25.526763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.469 [2024-10-17 19:17:25.580505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.469  [2024-10-17T19:17:25.985Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:15:16.727 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:16.727 19:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:17.294 19:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:15:17.294 19:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:17.294 19:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:17.294 19:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:17.294 { 00:15:17.294 "subsystems": [ 00:15:17.294 { 00:15:17.294 "subsystem": "bdev", 00:15:17.294 "config": [ 00:15:17.294 { 00:15:17.294 "params": { 00:15:17.294 "trtype": "pcie", 00:15:17.294 "traddr": "0000:00:10.0", 00:15:17.294 "name": "Nvme0" 00:15:17.294 }, 00:15:17.294 "method": "bdev_nvme_attach_controller" 00:15:17.294 }, 00:15:17.294 { 00:15:17.294 "method": "bdev_wait_for_examine" 00:15:17.294 } 00:15:17.294 ] 00:15:17.294 } 00:15:17.294 ] 00:15:17.294 } 00:15:17.294 [2024-10-17 19:17:26.517501] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:17.294 [2024-10-17 19:17:26.517618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59996 ] 00:15:17.553 [2024-10-17 19:17:26.657495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.553 [2024-10-17 19:17:26.724883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.553 [2024-10-17 19:17:26.779018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.811  [2024-10-17T19:17:27.327Z] Copying: 56/56 [kB] (average 54 MBps) 00:15:18.069 00:15:18.069 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:15:18.069 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:18.069 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:18.069 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:18.069 { 00:15:18.069 "subsystems": [ 00:15:18.069 { 00:15:18.069 "subsystem": "bdev", 00:15:18.069 "config": [ 00:15:18.069 { 00:15:18.069 "params": { 00:15:18.069 "trtype": "pcie", 00:15:18.069 "traddr": "0000:00:10.0", 00:15:18.069 "name": "Nvme0" 00:15:18.069 }, 00:15:18.069 "method": "bdev_nvme_attach_controller" 00:15:18.069 }, 00:15:18.069 { 00:15:18.069 "method": "bdev_wait_for_examine" 00:15:18.069 } 00:15:18.069 ] 00:15:18.069 } 00:15:18.069 ] 00:15:18.069 } 00:15:18.069 [2024-10-17 19:17:27.145723] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:18.069 [2024-10-17 19:17:27.145836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60015 ] 00:15:18.069 [2024-10-17 19:17:27.283460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.327 [2024-10-17 19:17:27.350942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.327 [2024-10-17 19:17:27.404536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.327  [2024-10-17T19:17:27.843Z] Copying: 56/56 [kB] (average 54 MBps) 00:15:18.585 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:18.585 19:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:18.585 { 00:15:18.585 "subsystems": [ 00:15:18.585 { 00:15:18.585 "subsystem": "bdev", 00:15:18.585 "config": [ 00:15:18.585 { 00:15:18.585 "params": { 00:15:18.585 "trtype": "pcie", 00:15:18.585 "traddr": "0000:00:10.0", 00:15:18.585 "name": "Nvme0" 00:15:18.585 }, 00:15:18.585 "method": "bdev_nvme_attach_controller" 00:15:18.585 }, 00:15:18.585 { 00:15:18.585 "method": "bdev_wait_for_examine" 00:15:18.585 } 00:15:18.585 ] 00:15:18.585 } 00:15:18.585 ] 00:15:18.585 } 00:15:18.585 [2024-10-17 19:17:27.772273] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:18.585 [2024-10-17 19:17:27.772382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:15:18.844 [2024-10-17 19:17:27.914055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.844 [2024-10-17 19:17:27.988585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.844 [2024-10-17 19:17:28.046640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.102  [2024-10-17T19:17:28.360Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:15:19.102 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:19.362 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:20.026 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:15:20.026 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:20.026 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:20.026 19:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:20.026 { 00:15:20.026 "subsystems": [ 00:15:20.026 { 00:15:20.026 "subsystem": "bdev", 00:15:20.026 "config": [ 00:15:20.026 { 00:15:20.026 "params": { 00:15:20.026 "trtype": "pcie", 00:15:20.026 "traddr": "0000:00:10.0", 00:15:20.026 "name": "Nvme0" 00:15:20.026 }, 00:15:20.026 "method": "bdev_nvme_attach_controller" 00:15:20.026 }, 00:15:20.026 { 00:15:20.026 "method": "bdev_wait_for_examine" 00:15:20.026 } 00:15:20.026 ] 00:15:20.026 } 00:15:20.026 ] 00:15:20.026 } 00:15:20.026 [2024-10-17 19:17:28.966080] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:20.026 [2024-10-17 19:17:28.966252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:15:20.026 [2024-10-17 19:17:29.110200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.026 [2024-10-17 19:17:29.186572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.026 [2024-10-17 19:17:29.245788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.285  [2024-10-17T19:17:29.801Z] Copying: 48/48 [kB] (average 46 MBps) 00:15:20.543 00:15:20.543 19:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:15:20.543 19:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:20.543 19:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:20.543 19:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:20.543 { 00:15:20.543 "subsystems": [ 00:15:20.543 { 00:15:20.543 "subsystem": "bdev", 00:15:20.543 "config": [ 00:15:20.543 { 00:15:20.543 "params": { 00:15:20.543 "trtype": "pcie", 00:15:20.543 "traddr": "0000:00:10.0", 00:15:20.543 "name": "Nvme0" 00:15:20.543 }, 00:15:20.543 "method": "bdev_nvme_attach_controller" 00:15:20.543 }, 00:15:20.543 { 00:15:20.543 "method": "bdev_wait_for_examine" 00:15:20.543 } 00:15:20.543 ] 00:15:20.543 } 00:15:20.543 ] 00:15:20.543 } 00:15:20.543 [2024-10-17 19:17:29.632335] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:20.543 [2024-10-17 19:17:29.632449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60063 ] 00:15:20.543 [2024-10-17 19:17:29.772952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.802 [2024-10-17 19:17:29.840083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.802 [2024-10-17 19:17:29.896236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.802  [2024-10-17T19:17:30.319Z] Copying: 48/48 [kB] (average 46 MBps) 00:15:21.061 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:21.061 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:21.061 [2024-10-17 19:17:30.263415] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:21.061 [2024-10-17 19:17:30.263530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60086 ] 00:15:21.061 { 00:15:21.061 "subsystems": [ 00:15:21.061 { 00:15:21.061 "subsystem": "bdev", 00:15:21.061 "config": [ 00:15:21.061 { 00:15:21.061 "params": { 00:15:21.061 "trtype": "pcie", 00:15:21.061 "traddr": "0000:00:10.0", 00:15:21.061 "name": "Nvme0" 00:15:21.061 }, 00:15:21.061 "method": "bdev_nvme_attach_controller" 00:15:21.061 }, 00:15:21.061 { 00:15:21.061 "method": "bdev_wait_for_examine" 00:15:21.061 } 00:15:21.061 ] 00:15:21.061 } 00:15:21.061 ] 00:15:21.061 } 00:15:21.320 [2024-10-17 19:17:30.400364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.320 [2024-10-17 19:17:30.468839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.320 [2024-10-17 19:17:30.524244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.579  [2024-10-17T19:17:30.837Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:21.579 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:15:21.837 19:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:22.096 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:15:22.096 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:15:22.096 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:22.096 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:22.355 { 00:15:22.355 "subsystems": [ 00:15:22.355 { 00:15:22.355 "subsystem": "bdev", 00:15:22.355 "config": [ 00:15:22.355 { 00:15:22.355 "params": { 00:15:22.355 "trtype": "pcie", 00:15:22.355 "traddr": "0000:00:10.0", 00:15:22.355 "name": "Nvme0" 00:15:22.355 }, 00:15:22.355 "method": "bdev_nvme_attach_controller" 00:15:22.355 }, 00:15:22.355 { 00:15:22.355 "method": "bdev_wait_for_examine" 00:15:22.355 } 00:15:22.355 ] 00:15:22.355 } 00:15:22.355 ] 00:15:22.355 } 00:15:22.355 [2024-10-17 19:17:31.364296] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:22.355 [2024-10-17 19:17:31.364407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60108 ] 00:15:22.355 [2024-10-17 19:17:31.505944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.355 [2024-10-17 19:17:31.582301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.613 [2024-10-17 19:17:31.641951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.613  [2024-10-17T19:17:32.129Z] Copying: 48/48 [kB] (average 46 MBps) 00:15:22.871 00:15:22.871 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:15:22.871 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:15:22.871 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:22.871 19:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:22.871 [2024-10-17 19:17:31.996725] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:22.871 [2024-10-17 19:17:31.996825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:15:22.871 { 00:15:22.871 "subsystems": [ 00:15:22.871 { 00:15:22.871 "subsystem": "bdev", 00:15:22.871 "config": [ 00:15:22.871 { 00:15:22.871 "params": { 00:15:22.871 "trtype": "pcie", 00:15:22.871 "traddr": "0000:00:10.0", 00:15:22.871 "name": "Nvme0" 00:15:22.871 }, 00:15:22.871 "method": "bdev_nvme_attach_controller" 00:15:22.871 }, 00:15:22.871 { 00:15:22.871 "method": "bdev_wait_for_examine" 00:15:22.871 } 00:15:22.871 ] 00:15:22.871 } 00:15:22.871 ] 00:15:22.871 } 00:15:23.129 [2024-10-17 19:17:32.129554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.129 [2024-10-17 19:17:32.199797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.129 [2024-10-17 19:17:32.257451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:23.129  [2024-10-17T19:17:32.644Z] Copying: 48/48 [kB] (average 46 MBps) 00:15:23.386 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
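[editor's note] Every spdk_dd invocation in this trace receives its bdev configuration on a file descriptor ("--json /dev/fd/62") rather than from a file on disk: gen_conf emits the JSON shown repeatedly above, which attaches the PCIe controller at 0000:00:10.0 as bdev "Nvme0" and then waits for examine. An equivalent invocation using process substitution might look like the following sketch (spdk_dd is a placeholder for the full build/bin/spdk_dd path).

    # Same config the log shows, fed to spdk_dd without a temporary file.
    conf='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }'
    spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(printf '%s' "$conf")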
00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:23.386 19:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:23.386 { 00:15:23.386 "subsystems": [ 00:15:23.386 { 00:15:23.386 "subsystem": "bdev", 00:15:23.386 "config": [ 00:15:23.386 { 00:15:23.386 "params": { 00:15:23.386 "trtype": "pcie", 00:15:23.386 "traddr": "0000:00:10.0", 00:15:23.387 "name": "Nvme0" 00:15:23.387 }, 00:15:23.387 "method": "bdev_nvme_attach_controller" 00:15:23.387 }, 00:15:23.387 { 00:15:23.387 "method": "bdev_wait_for_examine" 00:15:23.387 } 00:15:23.387 ] 00:15:23.387 } 00:15:23.387 ] 00:15:23.387 } 00:15:23.387 [2024-10-17 19:17:32.637634] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:23.387 [2024-10-17 19:17:32.637750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:15:23.645 [2024-10-17 19:17:32.778072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.645 [2024-10-17 19:17:32.846866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.903 [2024-10-17 19:17:32.904363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:23.903  [2024-10-17T19:17:33.420Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:15:24.162 00:15:24.162 00:15:24.162 real 0m14.856s 00:15:24.162 user 0m10.803s 00:15:24.162 sys 0m5.541s 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.162 ************************************ 00:15:24.162 END TEST dd_rw 00:15:24.162 ************************************ 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:24.162 ************************************ 00:15:24.162 START TEST dd_rw_offset 00:15:24.162 ************************************ 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=imwa4j5nagaoiahu676vacozik5m39hwllqlxai3osnck5ayoaluhpopov8hx53w8eqqhlvo86p3r6codgj70ovmx9anxy3pxy5vax3jwqeook9n8ayev7oxho5ygadrr4i02lq58r2gcvh0n0mr9871mzs3mf995anf0x00xhtvfy0jt1tfvms5dpm1nyiqqmbnetqy7u5mybdpyd9kzwmv631grauiowl4i3ydv4x73gdfdvz62so9hci5a951uqhm76apyfsqzzcudh7k8rp7zrr2vvdmft248oi5yvslw7nm6tcma914earbpffnw514o0urkro0qcsuhy729plzux52v9aabatvvrb87cscq2gw04mwwfhtpgt5cfg2u5it717k39r7jx6iq6lknslbm2jnfdua6b70jcz6mlevijx69g2x8ody3nslxbkn49t069os0s8jio4dm1dt3j05akgnjpal82ie3sjp4umzx4z6k723s0x5t8ed6k1zhjzh99rixcui6qsw33kdwozxsd32tbvayx0mofol4qs0a4omkriuncf7pvcz7cc0p0rk9bd4132mmevcoovrgln7afulo7mlx43fz09t5qtsumlsdh503w36c15kls07925pyifwderk4e7uwbju1cljj58lulicbi0v5mwsf415xrkvaaa0a85uvo63u6cjx540uzutjdv1eq53ppeucwtjvf9fmc14y2wd8xuu9ora3tnuxemz7kp042bb1nta2us9v2gr5kjql0ycrsi83iutswdp7qjyzkzjkhzaumzt8lpns1y6mumq7cjaymxiolthgdxwk85gi6272e04i99huf9oi8hi6z4a62eqg9w6ogmqhx3g7ndu3y2larqevsbld6pwnf03qltylc27g9zuphmhkkdtqr0k7pb6yg3fgwbdjvc78052a6sod2l94wfpxc4klgc9wazotec22ppfanbed3emgw4u3hxe18epcbjhn5o9k59fwi1cdz0tc7pvkksfv8qyi4jo4h8cdi3sjuaby4tvpsfrwt7dz2mwt3on8m9zqpmfax1th7wzp7ik98c3cczhyzbk3feeb6djxsi1aa9aayw7y7yiqj3qgje952o885tgwzoh9i1uo7s4nq675c0d46ib6newswa1n5adqe05zyf87orsbw5ed8dv6887ls6nizh032asrss33jn6pdr5k1monj1d7otm48paqv3dxofkh2kkyrn3cxsc6n2rwf8vaxeycsz3ompql067nt4urg5n5fop13muobcnqabilkjnmmg50e5x3er0rnu7dgn18x399fvfdn245snbyax20wxx6cggo33hu0spsiqlsjrucdoh8yq5bwue95uxpvn6ld7r8no1tpol2vzbg0matg3smojam1vu6r68ltfza323otq8rij7aq7fy7efd34w9gp5pbkcosktosc60abl0phb1wom8qyjgqj22ll8xdbwkkvwuvk6ul7g6t67quhyp12r5gizrtd90z3x77wocxgrbq3v56nj89qq1me6zpyj16e2uqycnxpl6o9ugjrzj63z466r814zibh8acsodzupwduml43xasq9xyl8vdhkgwctecaydacz1q9z2dnum1kvx8nrykxgpib52y68q5950xwo671mo3jos6wp56050sv7omeupeqgxxacg144rnsejijgphx3kjcgqammidqg11su9khbl2sa00ncjdxfruh1c6z6ps22v2vwrkd3l58rs4yq14e7n6b87kp7a72ybgsteulo2wokzqcv0swr9hvs6useco8r2s1z1e8eajd1xlgsj6jlfvhgmzb7xubc3wpavf7l079nq3dzj9zvx64up0s8pwx6sv5u1sy4bx3wcjl6wn43azyypje4d8wawumaql06oqvoqgvqkiyd0quzyxugd97u3in4i8mzg0c7chmm9j8nu3nx8u14x13u8tdjdzfe8t5d6z2f7i1nd0eot8k8ssh5so6ub7ax0xptfn5i7wkxob3tehiaerdd35dkcxkawakdmfc46imffqlqpkv2rabzt8r07o0w7ozyhpvykqlskaw3ogjal61gr9tuq998yujnwe01omk8l8elh6944etwswgltz3mfkpyg1mulhvy9jg3hyho3u48xcsqb02u6eth61f1egx6zn6s04dc0oqmw8fe3v605fnmithjb3fxlm8spkwo2bq9farugv3lh2qmvznztigu2l4v58jbpbj4d308wf2kgy2j2jscjh0q4fpd3x0fvosypofvuwzq1cqi10fn8vkvabwk5jfldnfwc4ca0clgtuf0a2wlu8ejcduj63c8uyzv8diphas4icvpllzbwq22g5tmnjkk66zzshr4iwn9k738lnt394t94heypm1b31paql6cff9xlvi952ddih7oquzss4vx4bf4qs01flo9uvx8oyrw676dmwczo5tueo1qf2xq8i3hkuvfxfhv8adqifyfylhdr9aidirgak23a1nlu4axgge1se1ndy0wl1nxki55h3cmecyycbqjsf84u85rr2nghfe4u4j81td9q1gk4tkku1ad3tjxjc1q6hsskfjib34bkbhq04pw629vqapzoehyl1ks2fakbq6jdllyu60tw56ef6bsjwcuo4xk3ixvvt5aksflc5gr3or7f1y58tlj36vznwpx66xef9p38iz6yyz4y062ez26vgqmilmu9iwe1k0n3njw8u8m73x45r3eh024ss50b2yjbwvwk50xthew6prmisotfljl14ki1qrp58ircttm68vjqqesu2mputvnd1jki48qhjwkdxy0pd22vulwxi3njz3skf0eh4yw1iio35q2sfjy2izsf4ar5ot6m9a6a3o2wt5qrb3hvjuskloxnes9t7mx7mdkgczrd6pzn3wjx6cmvzgew5blvbp3wrh4gn53za888yz0iwswxd7eab80wvgtbdvsxnop8rtlu55186ceo38ga71cpc99pkq7ydmaxy5h57pddwdsbqp2h9o7nd7gkf6ols8zrax4ip136ogcnr7n5nmfknhta4gowbnewemp4pq5v6abk24nkwpot49nb5fhtzhu5rwzud2fuukv1o39f6sxj7gwxczj8pu3eh1isy04cqyutzecl5r2027nobxkzb0qz4gsb6c9z7eq0upgjm69hxtl4gsqlsg7qt1l128s9mxv3kc371enint0w636ldnb26d21x1jb8pyzfpt1rdakhzk2vz2zygjuu0lh7fg96r95d9usryooyzgfdi7b4l9zmia4jpjb13g1f4v1nlwtq9gh7x88iaph07zm0pb55p8w6ozetcxh9bichyfkzjeg5mrz6f8yt3rwx5my2wdouizqvqvj7llwr1q4k1ot85mx9c02a5zgm4hxkz69y3pganpjglnih62h4cpppl58j3ckbti3xpuvh4nzp2w8xcx4rukd5ulfoed2z1yg3f2tqmqo5
2ye344goa6s9mrub7267bsfjdv25cg59ijk85om8v4vas2hmah2wlmtakyz2md7nvj8d2qtxboegtfvzewtz58z5o9zcuz6qooiz7d70jr5ft5e1w1kecxlsi2gg3pj4orrpyasu28tw8so8g3bjzdmw2pvaz5yr80i1edosuk82ommmf61skg9bdcqybcdhelh1r7xbum20zbp0aoh24ylc3hlmnrd7bltqp0msin5daxyh64uj1vnt5qj2sj5mwzv6i5iawlawaatvmvzypqg2ii2nrdl2s7bjm3t8mh6crvw86zhgange7uol4fxynwmop6b8gd4xxajcmkoctz73zawxjj42tev2xgt6vw33eh58nn05fjhdqgtjany3kiyef336oo6c2xqyhse8bw5u7aj2dcncswzvn2d947baydsqm18d5m35q6s07egzqm3lf742ozvnvhx73kr1wm8fg42hejadz8gli2909s18hx0i5a36rq8rsq5u5emyinvjuytratpajfqt8olkxvsl8zsiy83s3w 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:15:24.162 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:24.162 { 00:15:24.162 "subsystems": [ 00:15:24.162 { 00:15:24.162 "subsystem": "bdev", 00:15:24.162 "config": [ 00:15:24.162 { 00:15:24.162 "params": { 00:15:24.162 "trtype": "pcie", 00:15:24.162 "traddr": "0000:00:10.0", 00:15:24.162 "name": "Nvme0" 00:15:24.162 }, 00:15:24.162 "method": "bdev_nvme_attach_controller" 00:15:24.162 }, 00:15:24.162 { 00:15:24.162 "method": "bdev_wait_for_examine" 00:15:24.162 } 00:15:24.162 ] 00:15:24.162 } 00:15:24.162 ] 00:15:24.162 } 00:15:24.162 [2024-10-17 19:17:33.383766] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:24.162 [2024-10-17 19:17:33.383877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60173 ] 00:15:24.420 [2024-10-17 19:17:33.524258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.420 [2024-10-17 19:17:33.593599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.420 [2024-10-17 19:17:33.650704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.678  [2024-10-17T19:17:34.194Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:15:24.936 00:15:24.936 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:15:24.936 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:15:24.936 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:15:24.936 19:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:24.936 [2024-10-17 19:17:34.007999] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
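[editor's note] The dd_rw_offset round visible here generates a 4096-character random payload (gen_bytes 4096, the long data= string above), writes it to the bdev at block offset 1 with --seek=1, reads that block back with --skip=1 --count=1, and compares the read-back bytes against the original via read -rn4096. A simplified sketch of the same round trip follows; the tr/head pipeline is only a stand-in for gen_bytes, and spdk_dd and BDEV_JSON are placeholders as before.

    data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)          # stand-in for "gen_bytes 4096"
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$BDEV_JSON"             # write at block offset 1
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$BDEV_JSON"   # read the same block back
    read -rn4096 data_check < dd.dump1
    [[ $data_check == "$data" ]] && echo "offset round trip verified"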
00:15:24.936 [2024-10-17 19:17:34.008099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60186 ] 00:15:24.936 { 00:15:24.936 "subsystems": [ 00:15:24.936 { 00:15:24.936 "subsystem": "bdev", 00:15:24.936 "config": [ 00:15:24.936 { 00:15:24.936 "params": { 00:15:24.936 "trtype": "pcie", 00:15:24.936 "traddr": "0000:00:10.0", 00:15:24.936 "name": "Nvme0" 00:15:24.936 }, 00:15:24.936 "method": "bdev_nvme_attach_controller" 00:15:24.936 }, 00:15:24.936 { 00:15:24.936 "method": "bdev_wait_for_examine" 00:15:24.936 } 00:15:24.936 ] 00:15:24.936 } 00:15:24.936 ] 00:15:24.936 } 00:15:24.936 [2024-10-17 19:17:34.139543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.194 [2024-10-17 19:17:34.207242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.194 [2024-10-17 19:17:34.263600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.194  [2024-10-17T19:17:34.711Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:15:25.453 00:15:25.453 19:17:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:15:25.453 19:17:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ imwa4j5nagaoiahu676vacozik5m39hwllqlxai3osnck5ayoaluhpopov8hx53w8eqqhlvo86p3r6codgj70ovmx9anxy3pxy5vax3jwqeook9n8ayev7oxho5ygadrr4i02lq58r2gcvh0n0mr9871mzs3mf995anf0x00xhtvfy0jt1tfvms5dpm1nyiqqmbnetqy7u5mybdpyd9kzwmv631grauiowl4i3ydv4x73gdfdvz62so9hci5a951uqhm76apyfsqzzcudh7k8rp7zrr2vvdmft248oi5yvslw7nm6tcma914earbpffnw514o0urkro0qcsuhy729plzux52v9aabatvvrb87cscq2gw04mwwfhtpgt5cfg2u5it717k39r7jx6iq6lknslbm2jnfdua6b70jcz6mlevijx69g2x8ody3nslxbkn49t069os0s8jio4dm1dt3j05akgnjpal82ie3sjp4umzx4z6k723s0x5t8ed6k1zhjzh99rixcui6qsw33kdwozxsd32tbvayx0mofol4qs0a4omkriuncf7pvcz7cc0p0rk9bd4132mmevcoovrgln7afulo7mlx43fz09t5qtsumlsdh503w36c15kls07925pyifwderk4e7uwbju1cljj58lulicbi0v5mwsf415xrkvaaa0a85uvo63u6cjx540uzutjdv1eq53ppeucwtjvf9fmc14y2wd8xuu9ora3tnuxemz7kp042bb1nta2us9v2gr5kjql0ycrsi83iutswdp7qjyzkzjkhzaumzt8lpns1y6mumq7cjaymxiolthgdxwk85gi6272e04i99huf9oi8hi6z4a62eqg9w6ogmqhx3g7ndu3y2larqevsbld6pwnf03qltylc27g9zuphmhkkdtqr0k7pb6yg3fgwbdjvc78052a6sod2l94wfpxc4klgc9wazotec22ppfanbed3emgw4u3hxe18epcbjhn5o9k59fwi1cdz0tc7pvkksfv8qyi4jo4h8cdi3sjuaby4tvpsfrwt7dz2mwt3on8m9zqpmfax1th7wzp7ik98c3cczhyzbk3feeb6djxsi1aa9aayw7y7yiqj3qgje952o885tgwzoh9i1uo7s4nq675c0d46ib6newswa1n5adqe05zyf87orsbw5ed8dv6887ls6nizh032asrss33jn6pdr5k1monj1d7otm48paqv3dxofkh2kkyrn3cxsc6n2rwf8vaxeycsz3ompql067nt4urg5n5fop13muobcnqabilkjnmmg50e5x3er0rnu7dgn18x399fvfdn245snbyax20wxx6cggo33hu0spsiqlsjrucdoh8yq5bwue95uxpvn6ld7r8no1tpol2vzbg0matg3smojam1vu6r68ltfza323otq8rij7aq7fy7efd34w9gp5pbkcosktosc60abl0phb1wom8qyjgqj22ll8xdbwkkvwuvk6ul7g6t67quhyp12r5gizrtd90z3x77wocxgrbq3v56nj89qq1me6zpyj16e2uqycnxpl6o9ugjrzj63z466r814zibh8acsodzupwduml43xasq9xyl8vdhkgwctecaydacz1q9z2dnum1kvx8nrykxgpib52y68q5950xwo671mo3jos6wp56050sv7omeupeqgxxacg144rnsejijgphx3kjcgqammidqg11su9khbl2sa00ncjdxfruh1c6z6ps22v2vwrkd3l58rs4yq14e7n6b87kp7a72ybgsteulo2wokzqcv0swr9hvs6useco8r2s1z1e8eajd1xlgsj6jlfvhgmzb7xubc3wpavf7l079nq3dzj9zvx64up0s8pwx6sv5u1sy4bx3wcjl6wn43azyypje4d8wawumaql06oqvoqgvqkiyd0quzyxugd97u3in4i8mzg0c7chmm9j8nu3nx8u14x13u8tdjdzfe8t5d6z2f7i1nd0eot8k8ssh5so6ub7ax0xptfn5i7wkxob3tehiaerdd35dkcxkawakdmfc46imffqlqpkv2rabzt8r07o0w7ozyhpvykqlskaw3ogjal61gr
9tuq998yujnwe01omk8l8elh6944etwswgltz3mfkpyg1mulhvy9jg3hyho3u48xcsqb02u6eth61f1egx6zn6s04dc0oqmw8fe3v605fnmithjb3fxlm8spkwo2bq9farugv3lh2qmvznztigu2l4v58jbpbj4d308wf2kgy2j2jscjh0q4fpd3x0fvosypofvuwzq1cqi10fn8vkvabwk5jfldnfwc4ca0clgtuf0a2wlu8ejcduj63c8uyzv8diphas4icvpllzbwq22g5tmnjkk66zzshr4iwn9k738lnt394t94heypm1b31paql6cff9xlvi952ddih7oquzss4vx4bf4qs01flo9uvx8oyrw676dmwczo5tueo1qf2xq8i3hkuvfxfhv8adqifyfylhdr9aidirgak23a1nlu4axgge1se1ndy0wl1nxki55h3cmecyycbqjsf84u85rr2nghfe4u4j81td9q1gk4tkku1ad3tjxjc1q6hsskfjib34bkbhq04pw629vqapzoehyl1ks2fakbq6jdllyu60tw56ef6bsjwcuo4xk3ixvvt5aksflc5gr3or7f1y58tlj36vznwpx66xef9p38iz6yyz4y062ez26vgqmilmu9iwe1k0n3njw8u8m73x45r3eh024ss50b2yjbwvwk50xthew6prmisotfljl14ki1qrp58ircttm68vjqqesu2mputvnd1jki48qhjwkdxy0pd22vulwxi3njz3skf0eh4yw1iio35q2sfjy2izsf4ar5ot6m9a6a3o2wt5qrb3hvjuskloxnes9t7mx7mdkgczrd6pzn3wjx6cmvzgew5blvbp3wrh4gn53za888yz0iwswxd7eab80wvgtbdvsxnop8rtlu55186ceo38ga71cpc99pkq7ydmaxy5h57pddwdsbqp2h9o7nd7gkf6ols8zrax4ip136ogcnr7n5nmfknhta4gowbnewemp4pq5v6abk24nkwpot49nb5fhtzhu5rwzud2fuukv1o39f6sxj7gwxczj8pu3eh1isy04cqyutzecl5r2027nobxkzb0qz4gsb6c9z7eq0upgjm69hxtl4gsqlsg7qt1l128s9mxv3kc371enint0w636ldnb26d21x1jb8pyzfpt1rdakhzk2vz2zygjuu0lh7fg96r95d9usryooyzgfdi7b4l9zmia4jpjb13g1f4v1nlwtq9gh7x88iaph07zm0pb55p8w6ozetcxh9bichyfkzjeg5mrz6f8yt3rwx5my2wdouizqvqvj7llwr1q4k1ot85mx9c02a5zgm4hxkz69y3pganpjglnih62h4cpppl58j3ckbti3xpuvh4nzp2w8xcx4rukd5ulfoed2z1yg3f2tqmqo52ye344goa6s9mrub7267bsfjdv25cg59ijk85om8v4vas2hmah2wlmtakyz2md7nvj8d2qtxboegtfvzewtz58z5o9zcuz6qooiz7d70jr5ft5e1w1kecxlsi2gg3pj4orrpyasu28tw8so8g3bjzdmw2pvaz5yr80i1edosuk82ommmf61skg9bdcqybcdhelh1r7xbum20zbp0aoh24ylc3hlmnrd7bltqp0msin5daxyh64uj1vnt5qj2sj5mwzv6i5iawlawaatvmvzypqg2ii2nrdl2s7bjm3t8mh6crvw86zhgange7uol4fxynwmop6b8gd4xxajcmkoctz73zawxjj42tev2xgt6vw33eh58nn05fjhdqgtjany3kiyef336oo6c2xqyhse8bw5u7aj2dcncswzvn2d947baydsqm18d5m35q6s07egzqm3lf742ozvnvhx73kr1wm8fg42hejadz8gli2909s18hx0i5a36rq8rsq5u5emyinvjuytratpajfqt8olkxvsl8zsiy83s3w == 
\i\m\w\a\4\j\5\n\a\g\a\o\i\a\h\u\6\7\6\v\a\c\o\z\i\k\5\m\3\9\h\w\l\l\q\l\x\a\i\3\o\s\n\c\k\5\a\y\o\a\l\u\h\p\o\p\o\v\8\h\x\5\3\w\8\e\q\q\h\l\v\o\8\6\p\3\r\6\c\o\d\g\j\7\0\o\v\m\x\9\a\n\x\y\3\p\x\y\5\v\a\x\3\j\w\q\e\o\o\k\9\n\8\a\y\e\v\7\o\x\h\o\5\y\g\a\d\r\r\4\i\0\2\l\q\5\8\r\2\g\c\v\h\0\n\0\m\r\9\8\7\1\m\z\s\3\m\f\9\9\5\a\n\f\0\x\0\0\x\h\t\v\f\y\0\j\t\1\t\f\v\m\s\5\d\p\m\1\n\y\i\q\q\m\b\n\e\t\q\y\7\u\5\m\y\b\d\p\y\d\9\k\z\w\m\v\6\3\1\g\r\a\u\i\o\w\l\4\i\3\y\d\v\4\x\7\3\g\d\f\d\v\z\6\2\s\o\9\h\c\i\5\a\9\5\1\u\q\h\m\7\6\a\p\y\f\s\q\z\z\c\u\d\h\7\k\8\r\p\7\z\r\r\2\v\v\d\m\f\t\2\4\8\o\i\5\y\v\s\l\w\7\n\m\6\t\c\m\a\9\1\4\e\a\r\b\p\f\f\n\w\5\1\4\o\0\u\r\k\r\o\0\q\c\s\u\h\y\7\2\9\p\l\z\u\x\5\2\v\9\a\a\b\a\t\v\v\r\b\8\7\c\s\c\q\2\g\w\0\4\m\w\w\f\h\t\p\g\t\5\c\f\g\2\u\5\i\t\7\1\7\k\3\9\r\7\j\x\6\i\q\6\l\k\n\s\l\b\m\2\j\n\f\d\u\a\6\b\7\0\j\c\z\6\m\l\e\v\i\j\x\6\9\g\2\x\8\o\d\y\3\n\s\l\x\b\k\n\4\9\t\0\6\9\o\s\0\s\8\j\i\o\4\d\m\1\d\t\3\j\0\5\a\k\g\n\j\p\a\l\8\2\i\e\3\s\j\p\4\u\m\z\x\4\z\6\k\7\2\3\s\0\x\5\t\8\e\d\6\k\1\z\h\j\z\h\9\9\r\i\x\c\u\i\6\q\s\w\3\3\k\d\w\o\z\x\s\d\3\2\t\b\v\a\y\x\0\m\o\f\o\l\4\q\s\0\a\4\o\m\k\r\i\u\n\c\f\7\p\v\c\z\7\c\c\0\p\0\r\k\9\b\d\4\1\3\2\m\m\e\v\c\o\o\v\r\g\l\n\7\a\f\u\l\o\7\m\l\x\4\3\f\z\0\9\t\5\q\t\s\u\m\l\s\d\h\5\0\3\w\3\6\c\1\5\k\l\s\0\7\9\2\5\p\y\i\f\w\d\e\r\k\4\e\7\u\w\b\j\u\1\c\l\j\j\5\8\l\u\l\i\c\b\i\0\v\5\m\w\s\f\4\1\5\x\r\k\v\a\a\a\0\a\8\5\u\v\o\6\3\u\6\c\j\x\5\4\0\u\z\u\t\j\d\v\1\e\q\5\3\p\p\e\u\c\w\t\j\v\f\9\f\m\c\1\4\y\2\w\d\8\x\u\u\9\o\r\a\3\t\n\u\x\e\m\z\7\k\p\0\4\2\b\b\1\n\t\a\2\u\s\9\v\2\g\r\5\k\j\q\l\0\y\c\r\s\i\8\3\i\u\t\s\w\d\p\7\q\j\y\z\k\z\j\k\h\z\a\u\m\z\t\8\l\p\n\s\1\y\6\m\u\m\q\7\c\j\a\y\m\x\i\o\l\t\h\g\d\x\w\k\8\5\g\i\6\2\7\2\e\0\4\i\9\9\h\u\f\9\o\i\8\h\i\6\z\4\a\6\2\e\q\g\9\w\6\o\g\m\q\h\x\3\g\7\n\d\u\3\y\2\l\a\r\q\e\v\s\b\l\d\6\p\w\n\f\0\3\q\l\t\y\l\c\2\7\g\9\z\u\p\h\m\h\k\k\d\t\q\r\0\k\7\p\b\6\y\g\3\f\g\w\b\d\j\v\c\7\8\0\5\2\a\6\s\o\d\2\l\9\4\w\f\p\x\c\4\k\l\g\c\9\w\a\z\o\t\e\c\2\2\p\p\f\a\n\b\e\d\3\e\m\g\w\4\u\3\h\x\e\1\8\e\p\c\b\j\h\n\5\o\9\k\5\9\f\w\i\1\c\d\z\0\t\c\7\p\v\k\k\s\f\v\8\q\y\i\4\j\o\4\h\8\c\d\i\3\s\j\u\a\b\y\4\t\v\p\s\f\r\w\t\7\d\z\2\m\w\t\3\o\n\8\m\9\z\q\p\m\f\a\x\1\t\h\7\w\z\p\7\i\k\9\8\c\3\c\c\z\h\y\z\b\k\3\f\e\e\b\6\d\j\x\s\i\1\a\a\9\a\a\y\w\7\y\7\y\i\q\j\3\q\g\j\e\9\5\2\o\8\8\5\t\g\w\z\o\h\9\i\1\u\o\7\s\4\n\q\6\7\5\c\0\d\4\6\i\b\6\n\e\w\s\w\a\1\n\5\a\d\q\e\0\5\z\y\f\8\7\o\r\s\b\w\5\e\d\8\d\v\6\8\8\7\l\s\6\n\i\z\h\0\3\2\a\s\r\s\s\3\3\j\n\6\p\d\r\5\k\1\m\o\n\j\1\d\7\o\t\m\4\8\p\a\q\v\3\d\x\o\f\k\h\2\k\k\y\r\n\3\c\x\s\c\6\n\2\r\w\f\8\v\a\x\e\y\c\s\z\3\o\m\p\q\l\0\6\7\n\t\4\u\r\g\5\n\5\f\o\p\1\3\m\u\o\b\c\n\q\a\b\i\l\k\j\n\m\m\g\5\0\e\5\x\3\e\r\0\r\n\u\7\d\g\n\1\8\x\3\9\9\f\v\f\d\n\2\4\5\s\n\b\y\a\x\2\0\w\x\x\6\c\g\g\o\3\3\h\u\0\s\p\s\i\q\l\s\j\r\u\c\d\o\h\8\y\q\5\b\w\u\e\9\5\u\x\p\v\n\6\l\d\7\r\8\n\o\1\t\p\o\l\2\v\z\b\g\0\m\a\t\g\3\s\m\o\j\a\m\1\v\u\6\r\6\8\l\t\f\z\a\3\2\3\o\t\q\8\r\i\j\7\a\q\7\f\y\7\e\f\d\3\4\w\9\g\p\5\p\b\k\c\o\s\k\t\o\s\c\6\0\a\b\l\0\p\h\b\1\w\o\m\8\q\y\j\g\q\j\2\2\l\l\8\x\d\b\w\k\k\v\w\u\v\k\6\u\l\7\g\6\t\6\7\q\u\h\y\p\1\2\r\5\g\i\z\r\t\d\9\0\z\3\x\7\7\w\o\c\x\g\r\b\q\3\v\5\6\n\j\8\9\q\q\1\m\e\6\z\p\y\j\1\6\e\2\u\q\y\c\n\x\p\l\6\o\9\u\g\j\r\z\j\6\3\z\4\6\6\r\8\1\4\z\i\b\h\8\a\c\s\o\d\z\u\p\w\d\u\m\l\4\3\x\a\s\q\9\x\y\l\8\v\d\h\k\g\w\c\t\e\c\a\y\d\a\c\z\1\q\9\z\2\d\n\u\m\1\k\v\x\8\n\r\y\k\x\g\p\i\b\5\2\y\6\8\q\5\9\5\0\x\w\o\6\7\1\m\o\3\j\o\s\6\w\p\5\6\0\5\0\s\v\7\o\m\e\u\p\e\q\g\x\x\a\c\g\1\4\4\r\n\s\e\j\i\j\g\p\h\x\3\k\j\c\g\q\a\m\m\i\d\q\g\1\1\s\u\9\k\h\b\l\2\s\a\0\0\n\c\j\d\x\f\r\u\h\1\c\6\z\6\p\s\2\2\v\
2\v\w\r\k\d\3\l\5\8\r\s\4\y\q\1\4\e\7\n\6\b\8\7\k\p\7\a\7\2\y\b\g\s\t\e\u\l\o\2\w\o\k\z\q\c\v\0\s\w\r\9\h\v\s\6\u\s\e\c\o\8\r\2\s\1\z\1\e\8\e\a\j\d\1\x\l\g\s\j\6\j\l\f\v\h\g\m\z\b\7\x\u\b\c\3\w\p\a\v\f\7\l\0\7\9\n\q\3\d\z\j\9\z\v\x\6\4\u\p\0\s\8\p\w\x\6\s\v\5\u\1\s\y\4\b\x\3\w\c\j\l\6\w\n\4\3\a\z\y\y\p\j\e\4\d\8\w\a\w\u\m\a\q\l\0\6\o\q\v\o\q\g\v\q\k\i\y\d\0\q\u\z\y\x\u\g\d\9\7\u\3\i\n\4\i\8\m\z\g\0\c\7\c\h\m\m\9\j\8\n\u\3\n\x\8\u\1\4\x\1\3\u\8\t\d\j\d\z\f\e\8\t\5\d\6\z\2\f\7\i\1\n\d\0\e\o\t\8\k\8\s\s\h\5\s\o\6\u\b\7\a\x\0\x\p\t\f\n\5\i\7\w\k\x\o\b\3\t\e\h\i\a\e\r\d\d\3\5\d\k\c\x\k\a\w\a\k\d\m\f\c\4\6\i\m\f\f\q\l\q\p\k\v\2\r\a\b\z\t\8\r\0\7\o\0\w\7\o\z\y\h\p\v\y\k\q\l\s\k\a\w\3\o\g\j\a\l\6\1\g\r\9\t\u\q\9\9\8\y\u\j\n\w\e\0\1\o\m\k\8\l\8\e\l\h\6\9\4\4\e\t\w\s\w\g\l\t\z\3\m\f\k\p\y\g\1\m\u\l\h\v\y\9\j\g\3\h\y\h\o\3\u\4\8\x\c\s\q\b\0\2\u\6\e\t\h\6\1\f\1\e\g\x\6\z\n\6\s\0\4\d\c\0\o\q\m\w\8\f\e\3\v\6\0\5\f\n\m\i\t\h\j\b\3\f\x\l\m\8\s\p\k\w\o\2\b\q\9\f\a\r\u\g\v\3\l\h\2\q\m\v\z\n\z\t\i\g\u\2\l\4\v\5\8\j\b\p\b\j\4\d\3\0\8\w\f\2\k\g\y\2\j\2\j\s\c\j\h\0\q\4\f\p\d\3\x\0\f\v\o\s\y\p\o\f\v\u\w\z\q\1\c\q\i\1\0\f\n\8\v\k\v\a\b\w\k\5\j\f\l\d\n\f\w\c\4\c\a\0\c\l\g\t\u\f\0\a\2\w\l\u\8\e\j\c\d\u\j\6\3\c\8\u\y\z\v\8\d\i\p\h\a\s\4\i\c\v\p\l\l\z\b\w\q\2\2\g\5\t\m\n\j\k\k\6\6\z\z\s\h\r\4\i\w\n\9\k\7\3\8\l\n\t\3\9\4\t\9\4\h\e\y\p\m\1\b\3\1\p\a\q\l\6\c\f\f\9\x\l\v\i\9\5\2\d\d\i\h\7\o\q\u\z\s\s\4\v\x\4\b\f\4\q\s\0\1\f\l\o\9\u\v\x\8\o\y\r\w\6\7\6\d\m\w\c\z\o\5\t\u\e\o\1\q\f\2\x\q\8\i\3\h\k\u\v\f\x\f\h\v\8\a\d\q\i\f\y\f\y\l\h\d\r\9\a\i\d\i\r\g\a\k\2\3\a\1\n\l\u\4\a\x\g\g\e\1\s\e\1\n\d\y\0\w\l\1\n\x\k\i\5\5\h\3\c\m\e\c\y\y\c\b\q\j\s\f\8\4\u\8\5\r\r\2\n\g\h\f\e\4\u\4\j\8\1\t\d\9\q\1\g\k\4\t\k\k\u\1\a\d\3\t\j\x\j\c\1\q\6\h\s\s\k\f\j\i\b\3\4\b\k\b\h\q\0\4\p\w\6\2\9\v\q\a\p\z\o\e\h\y\l\1\k\s\2\f\a\k\b\q\6\j\d\l\l\y\u\6\0\t\w\5\6\e\f\6\b\s\j\w\c\u\o\4\x\k\3\i\x\v\v\t\5\a\k\s\f\l\c\5\g\r\3\o\r\7\f\1\y\5\8\t\l\j\3\6\v\z\n\w\p\x\6\6\x\e\f\9\p\3\8\i\z\6\y\y\z\4\y\0\6\2\e\z\2\6\v\g\q\m\i\l\m\u\9\i\w\e\1\k\0\n\3\n\j\w\8\u\8\m\7\3\x\4\5\r\3\e\h\0\2\4\s\s\5\0\b\2\y\j\b\w\v\w\k\5\0\x\t\h\e\w\6\p\r\m\i\s\o\t\f\l\j\l\1\4\k\i\1\q\r\p\5\8\i\r\c\t\t\m\6\8\v\j\q\q\e\s\u\2\m\p\u\t\v\n\d\1\j\k\i\4\8\q\h\j\w\k\d\x\y\0\p\d\2\2\v\u\l\w\x\i\3\n\j\z\3\s\k\f\0\e\h\4\y\w\1\i\i\o\3\5\q\2\s\f\j\y\2\i\z\s\f\4\a\r\5\o\t\6\m\9\a\6\a\3\o\2\w\t\5\q\r\b\3\h\v\j\u\s\k\l\o\x\n\e\s\9\t\7\m\x\7\m\d\k\g\c\z\r\d\6\p\z\n\3\w\j\x\6\c\m\v\z\g\e\w\5\b\l\v\b\p\3\w\r\h\4\g\n\5\3\z\a\8\8\8\y\z\0\i\w\s\w\x\d\7\e\a\b\8\0\w\v\g\t\b\d\v\s\x\n\o\p\8\r\t\l\u\5\5\1\8\6\c\e\o\3\8\g\a\7\1\c\p\c\9\9\p\k\q\7\y\d\m\a\x\y\5\h\5\7\p\d\d\w\d\s\b\q\p\2\h\9\o\7\n\d\7\g\k\f\6\o\l\s\8\z\r\a\x\4\i\p\1\3\6\o\g\c\n\r\7\n\5\n\m\f\k\n\h\t\a\4\g\o\w\b\n\e\w\e\m\p\4\p\q\5\v\6\a\b\k\2\4\n\k\w\p\o\t\4\9\n\b\5\f\h\t\z\h\u\5\r\w\z\u\d\2\f\u\u\k\v\1\o\3\9\f\6\s\x\j\7\g\w\x\c\z\j\8\p\u\3\e\h\1\i\s\y\0\4\c\q\y\u\t\z\e\c\l\5\r\2\0\2\7\n\o\b\x\k\z\b\0\q\z\4\g\s\b\6\c\9\z\7\e\q\0\u\p\g\j\m\6\9\h\x\t\l\4\g\s\q\l\s\g\7\q\t\1\l\1\2\8\s\9\m\x\v\3\k\c\3\7\1\e\n\i\n\t\0\w\6\3\6\l\d\n\b\2\6\d\2\1\x\1\j\b\8\p\y\z\f\p\t\1\r\d\a\k\h\z\k\2\v\z\2\z\y\g\j\u\u\0\l\h\7\f\g\9\6\r\9\5\d\9\u\s\r\y\o\o\y\z\g\f\d\i\7\b\4\l\9\z\m\i\a\4\j\p\j\b\1\3\g\1\f\4\v\1\n\l\w\t\q\9\g\h\7\x\8\8\i\a\p\h\0\7\z\m\0\p\b\5\5\p\8\w\6\o\z\e\t\c\x\h\9\b\i\c\h\y\f\k\z\j\e\g\5\m\r\z\6\f\8\y\t\3\r\w\x\5\m\y\2\w\d\o\u\i\z\q\v\q\v\j\7\l\l\w\r\1\q\4\k\1\o\t\8\5\m\x\9\c\0\2\a\5\z\g\m\4\h\x\k\z\6\9\y\3\p\g\a\n\p\j\g\l\n\i\h\6\2\h\4\c\p\p\p\l\5\8\j\3\c\k\b\t\i\3\x\p\u\v\h\4\n\z\p\2\w\8\x\c\x\4\r\u\k\d\5\u\l\f\o\e\d\2\z\1\y\g\3\f\2\t\q\m\q\o\5\2\y\e\3\4
\4\g\o\a\6\s\9\m\r\u\b\7\2\6\7\b\s\f\j\d\v\2\5\c\g\5\9\i\j\k\8\5\o\m\8\v\4\v\a\s\2\h\m\a\h\2\w\l\m\t\a\k\y\z\2\m\d\7\n\v\j\8\d\2\q\t\x\b\o\e\g\t\f\v\z\e\w\t\z\5\8\z\5\o\9\z\c\u\z\6\q\o\o\i\z\7\d\7\0\j\r\5\f\t\5\e\1\w\1\k\e\c\x\l\s\i\2\g\g\3\p\j\4\o\r\r\p\y\a\s\u\2\8\t\w\8\s\o\8\g\3\b\j\z\d\m\w\2\p\v\a\z\5\y\r\8\0\i\1\e\d\o\s\u\k\8\2\o\m\m\m\f\6\1\s\k\g\9\b\d\c\q\y\b\c\d\h\e\l\h\1\r\7\x\b\u\m\2\0\z\b\p\0\a\o\h\2\4\y\l\c\3\h\l\m\n\r\d\7\b\l\t\q\p\0\m\s\i\n\5\d\a\x\y\h\6\4\u\j\1\v\n\t\5\q\j\2\s\j\5\m\w\z\v\6\i\5\i\a\w\l\a\w\a\a\t\v\m\v\z\y\p\q\g\2\i\i\2\n\r\d\l\2\s\7\b\j\m\3\t\8\m\h\6\c\r\v\w\8\6\z\h\g\a\n\g\e\7\u\o\l\4\f\x\y\n\w\m\o\p\6\b\8\g\d\4\x\x\a\j\c\m\k\o\c\t\z\7\3\z\a\w\x\j\j\4\2\t\e\v\2\x\g\t\6\v\w\3\3\e\h\5\8\n\n\0\5\f\j\h\d\q\g\t\j\a\n\y\3\k\i\y\e\f\3\3\6\o\o\6\c\2\x\q\y\h\s\e\8\b\w\5\u\7\a\j\2\d\c\n\c\s\w\z\v\n\2\d\9\4\7\b\a\y\d\s\q\m\1\8\d\5\m\3\5\q\6\s\0\7\e\g\z\q\m\3\l\f\7\4\2\o\z\v\n\v\h\x\7\3\k\r\1\w\m\8\f\g\4\2\h\e\j\a\d\z\8\g\l\i\2\9\0\9\s\1\8\h\x\0\i\5\a\3\6\r\q\8\r\s\q\5\u\5\e\m\y\i\n\v\j\u\y\t\r\a\t\p\a\j\f\q\t\8\o\l\k\x\v\s\l\8\z\s\i\y\8\3\s\3\w ]] 00:15:25.453 00:15:25.453 real 0m1.302s 00:15:25.453 user 0m0.885s 00:15:25.453 sys 0m0.596s 00:15:25.453 19:17:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.453 19:17:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:15:25.453 ************************************ 00:15:25.454 END TEST dd_rw_offset 00:15:25.454 ************************************ 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:15:25.454 19:17:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:25.454 [2024-10-17 19:17:34.672007] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:25.454 [2024-10-17 19:17:34.672105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60216 ] 00:15:25.454 { 00:15:25.454 "subsystems": [ 00:15:25.454 { 00:15:25.454 "subsystem": "bdev", 00:15:25.454 "config": [ 00:15:25.454 { 00:15:25.454 "params": { 00:15:25.454 "trtype": "pcie", 00:15:25.454 "traddr": "0000:00:10.0", 00:15:25.454 "name": "Nvme0" 00:15:25.454 }, 00:15:25.454 "method": "bdev_nvme_attach_controller" 00:15:25.454 }, 00:15:25.454 { 00:15:25.454 "method": "bdev_wait_for_examine" 00:15:25.454 } 00:15:25.454 ] 00:15:25.454 } 00:15:25.454 ] 00:15:25.454 } 00:15:25.712 [2024-10-17 19:17:34.805511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.712 [2024-10-17 19:17:34.874096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.712 [2024-10-17 19:17:34.930016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.970  [2024-10-17T19:17:35.487Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:15:26.229 00:15:26.229 19:17:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:26.229 00:15:26.229 real 0m17.935s 00:15:26.229 user 0m12.703s 00:15:26.229 sys 0m6.815s 00:15:26.229 19:17:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.229 19:17:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:15:26.229 ************************************ 00:15:26.229 END TEST spdk_dd_basic_rw 00:15:26.229 ************************************ 00:15:26.229 19:17:35 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:26.229 19:17:35 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:26.229 19:17:35 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.229 19:17:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:26.229 ************************************ 00:15:26.229 START TEST spdk_dd_posix 00:15:26.229 ************************************ 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:15:26.229 * Looking for test storage... 
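For reference, the fragments above can be tied together in one short sketch (assumed layout, not the suite's exact code): gen_conf emits the bdev configuration shown in the trace, spdk_dd receives it on a spare file descriptor, and the clear_nvme step zeroes the start of the namespace.

# Minimal sketch only: mirrors the clear_nvme invocation and the JSON dumped above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

gen_conf() {   # stand-in for the suite's gen_conf helper
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# The trace passes the config as --json /dev/fd/62; process substitution is an
# equivalent way to hand the same JSON to spdk_dd.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)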
00:15:26.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.229 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.489 --rc genhtml_branch_coverage=1 00:15:26.489 --rc genhtml_function_coverage=1 00:15:26.489 --rc genhtml_legend=1 00:15:26.489 --rc geninfo_all_blocks=1 00:15:26.489 --rc geninfo_unexecuted_blocks=1 00:15:26.489 00:15:26.489 ' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.489 --rc genhtml_branch_coverage=1 00:15:26.489 --rc genhtml_function_coverage=1 00:15:26.489 --rc genhtml_legend=1 00:15:26.489 --rc geninfo_all_blocks=1 00:15:26.489 --rc geninfo_unexecuted_blocks=1 00:15:26.489 00:15:26.489 ' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.489 --rc genhtml_branch_coverage=1 00:15:26.489 --rc genhtml_function_coverage=1 00:15:26.489 --rc genhtml_legend=1 00:15:26.489 --rc geninfo_all_blocks=1 00:15:26.489 --rc geninfo_unexecuted_blocks=1 00:15:26.489 00:15:26.489 ' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.489 --rc genhtml_branch_coverage=1 00:15:26.489 --rc genhtml_function_coverage=1 00:15:26.489 --rc genhtml_legend=1 00:15:26.489 --rc geninfo_all_blocks=1 00:15:26.489 --rc geninfo_unexecuted_blocks=1 00:15:26.489 00:15:26.489 ' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:15:26.489 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:15:26.490 * First test run, liburing in use 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:26.490 ************************************ 00:15:26.490 START TEST dd_flag_append 00:15:26.490 ************************************ 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=02zi1wziy7pxj1wgmpxzntwxb4awsj3y 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=78j84o07lzp3knf402hbgytmnhibz19o 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 02zi1wziy7pxj1wgmpxzntwxb4awsj3y 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 78j84o07lzp3knf402hbgytmnhibz19o 00:15:26.490 19:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:26.490 [2024-10-17 19:17:35.568313] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
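The append test above reduces to: write one 32-byte random string to dd.dump0 and another to dd.dump1, copy dump0 onto dump1 with --oflag=append, then check that dump1 ends up as the concatenation (the [[ ... ]] comparison in the trace that follows). A rough sketch under those assumptions, with illustrative helpers in place of gen_bytes:

# Sketch of the dd_flag_append check (not the test's exact code).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

dump0=$(tr -dc a-z0-9 </dev/urandom | head -c 32)   # stand-in for gen_bytes 32
dump1=$(tr -dc a-z0-9 </dev/urandom | head -c 32)
printf %s "$dump0" > "$test_file0"
printf %s "$dump1" > "$test_file1"

# Copy dump0 onto dump1 with the append flag, as the spdk_dd call above does.
"$SPDK_DD" --if="$test_file0" --of="$test_file1" --oflag=append

# dump1 must now hold its original bytes followed by dump0's.
[[ $(<"$test_file1") == "${dump1}${dump0}" ]] && echo 'append flag OK'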
00:15:26.490 [2024-10-17 19:17:35.568468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:15:26.490 [2024-10-17 19:17:35.709955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.753 [2024-10-17 19:17:35.789256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.753 [2024-10-17 19:17:35.856299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.753  [2024-10-17T19:17:36.270Z] Copying: 32/32 [B] (average 31 kBps) 00:15:27.012 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 78j84o07lzp3knf402hbgytmnhibz19o02zi1wziy7pxj1wgmpxzntwxb4awsj3y == \7\8\j\8\4\o\0\7\l\z\p\3\k\n\f\4\0\2\h\b\g\y\t\m\n\h\i\b\z\1\9\o\0\2\z\i\1\w\z\i\y\7\p\x\j\1\w\g\m\p\x\z\n\t\w\x\b\4\a\w\s\j\3\y ]] 00:15:27.012 00:15:27.012 real 0m0.610s 00:15:27.012 user 0m0.332s 00:15:27.012 sys 0m0.312s 00:15:27.012 ************************************ 00:15:27.012 END TEST dd_flag_append 00:15:27.012 ************************************ 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:27.012 ************************************ 00:15:27.012 START TEST dd_flag_directory 00:15:27.012 ************************************ 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.012 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.013 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.013 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:27.013 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:27.013 [2024-10-17 19:17:36.216270] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:27.013 [2024-10-17 19:17:36.216415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 00:15:27.271 [2024-10-17 19:17:36.352026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.271 [2024-10-17 19:17:36.422725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.271 [2024-10-17 19:17:36.477630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.271 [2024-10-17 19:17:36.515969] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:27.271 [2024-10-17 19:17:36.516097] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:27.271 [2024-10-17 19:17:36.516145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:27.530 [2024-10-17 19:17:36.632575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.530 19:17:36 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:27.530 19:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:27.530 [2024-10-17 19:17:36.766817] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:27.530 [2024-10-17 19:17:36.766968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60326 ] 00:15:27.789 [2024-10-17 19:17:36.906568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.789 [2024-10-17 19:17:36.975991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.789 [2024-10-17 19:17:37.029334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.047 [2024-10-17 19:17:37.066663] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:28.047 [2024-10-17 19:17:37.066735] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:28.047 [2024-10-17 19:17:37.066757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:28.047 [2024-10-17 19:17:37.184036] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.047 00:15:28.047 real 0m1.098s 00:15:28.047 user 0m0.602s 00:15:28.047 sys 0m0.285s 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.047 ************************************ 00:15:28.047 END TEST dd_flag_directory 00:15:28.047 ************************************ 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:15:28.047 19:17:37 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.047 19:17:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:28.305 ************************************ 00:15:28.305 START TEST dd_flag_nofollow 00:15:28.305 ************************************ 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:28.306 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:28.306 [2024-10-17 19:17:37.381090] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
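dd_flag_nofollow links dd.dump0.link and dd.dump1.link to the dump files and expects spdk_dd to refuse symlinked paths whenever nofollow is set; the "Too many levels of symbolic links" errors traced further on are the intended outcome, and the final plain copy through the link must still succeed. A condensed sketch (illustrative, not the test's exact code):

# Sketch of the dd_flag_nofollow scenario.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

ln -fs "$f0" "$f0.link"   # symlinks created exactly as in the trace above
ln -fs "$f1" "$f1.link"

# With --iflag=nofollow the open must fail with ELOOP
# ("Too many levels of symbolic links" in the errors traced below).
if ! "$SPDK_DD" --if="$f0.link" --iflag=nofollow --of="$f1"; then
  echo 'nofollow correctly rejected the symlinked input'
fi

# Without the flag, the same copy through the link is expected to succeed.
"$SPDK_DD" --if="$f0.link" --of="$f1"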
00:15:28.306 [2024-10-17 19:17:37.381236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:15:28.306 [2024-10-17 19:17:37.522491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.564 [2024-10-17 19:17:37.597567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.564 [2024-10-17 19:17:37.655372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.564 [2024-10-17 19:17:37.694575] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:28.564 [2024-10-17 19:17:37.694648] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:28.564 [2024-10-17 19:17:37.694673] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:28.564 [2024-10-17 19:17:37.809268] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:28.822 19:17:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:28.822 19:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:28.822 [2024-10-17 19:17:37.954344] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:28.822 [2024-10-17 19:17:37.954536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60364 ] 00:15:29.080 [2024-10-17 19:17:38.097903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.080 [2024-10-17 19:17:38.168024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.080 [2024-10-17 19:17:38.222284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.081 [2024-10-17 19:17:38.259084] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:29.081 [2024-10-17 19:17:38.259166] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:29.081 [2024-10-17 19:17:38.259191] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:29.339 [2024-10-17 19:17:38.376694] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:15:29.339 19:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:29.339 [2024-10-17 19:17:38.514475] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:29.339 [2024-10-17 19:17:38.514599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:15:29.597 [2024-10-17 19:17:38.657411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.597 [2024-10-17 19:17:38.727284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.597 [2024-10-17 19:17:38.781919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.597  [2024-10-17T19:17:39.113Z] Copying: 512/512 [B] (average 500 kBps) 00:15:29.855 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ aulks8zu38yntdwf35ffckblxthuzbalsojeh4mw1euigukelqppsexxcpt23pzp5ffb76k2o4k6mybg8o7cmrtr4h6e6y8em5kb2t4rncbvvcp8pq3yhs5un6a2mmz2bpcq4fz4811g2a0m5kwgn4809txqmoaqy14614if4s449pi6vf81ec9eii2uf6iirv75cdr01136bkg7wl5ya78hybusmk5yh9lwik3krmfnusc9d7r6g7zztag3tjlcmzgyo6nnfjyqvfs0i3fdi9h3y0mg7xokvbmoy6m64fpa8g4trrkh8pu84pmfq5idv9nmhd0wbybs20adkw43qw3xl7ivxiwaeymo6yih00p4sqykqkx1cnsckx29d4u4sgqgg5c64nwbkfdqsiyv0ubpdi87cb6fvdhjttpeu5edzpxnlw9sur5v97yochhxdn75akyjieevv9vtbzhrglbq0aqvh42zkziziosqk692vuhcfg8skztvf8alrf4q == \a\u\l\k\s\8\z\u\3\8\y\n\t\d\w\f\3\5\f\f\c\k\b\l\x\t\h\u\z\b\a\l\s\o\j\e\h\4\m\w\1\e\u\i\g\u\k\e\l\q\p\p\s\e\x\x\c\p\t\2\3\p\z\p\5\f\f\b\7\6\k\2\o\4\k\6\m\y\b\g\8\o\7\c\m\r\t\r\4\h\6\e\6\y\8\e\m\5\k\b\2\t\4\r\n\c\b\v\v\c\p\8\p\q\3\y\h\s\5\u\n\6\a\2\m\m\z\2\b\p\c\q\4\f\z\4\8\1\1\g\2\a\0\m\5\k\w\g\n\4\8\0\9\t\x\q\m\o\a\q\y\1\4\6\1\4\i\f\4\s\4\4\9\p\i\6\v\f\8\1\e\c\9\e\i\i\2\u\f\6\i\i\r\v\7\5\c\d\r\0\1\1\3\6\b\k\g\7\w\l\5\y\a\7\8\h\y\b\u\s\m\k\5\y\h\9\l\w\i\k\3\k\r\m\f\n\u\s\c\9\d\7\r\6\g\7\z\z\t\a\g\3\t\j\l\c\m\z\g\y\o\6\n\n\f\j\y\q\v\f\s\0\i\3\f\d\i\9\h\3\y\0\m\g\7\x\o\k\v\b\m\o\y\6\m\6\4\f\p\a\8\g\4\t\r\r\k\h\8\p\u\8\4\p\m\f\q\5\i\d\v\9\n\m\h\d\0\w\b\y\b\s\2\0\a\d\k\w\4\3\q\w\3\x\l\7\i\v\x\i\w\a\e\y\m\o\6\y\i\h\0\0\p\4\s\q\y\k\q\k\x\1\c\n\s\c\k\x\2\9\d\4\u\4\s\g\q\g\g\5\c\6\4\n\w\b\k\f\d\q\s\i\y\v\0\u\b\p\d\i\8\7\c\b\6\f\v\d\h\j\t\t\p\e\u\5\e\d\z\p\x\n\l\w\9\s\u\r\5\v\9\7\y\o\c\h\h\x\d\n\7\5\a\k\y\j\i\e\e\v\v\9\v\t\b\z\h\r\g\l\b\q\0\a\q\v\h\4\2\z\k\z\i\z\i\o\s\q\k\6\9\2\v\u\h\c\f\g\8\s\k\z\t\v\f\8\a\l\r\f\4\q ]] 00:15:29.855 00:15:29.855 real 0m1.702s 00:15:29.855 user 0m0.927s 00:15:29.855 sys 0m0.589s 00:15:29.855 ************************************ 00:15:29.855 END TEST dd_flag_nofollow 00:15:29.855 ************************************ 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 ************************************ 00:15:29.855 START TEST dd_flag_noatime 00:15:29.855 ************************************ 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1729192658 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1729192659 00:15:29.855 19:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:15:31.230 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:31.230 [2024-10-17 19:17:40.147779] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:31.230 [2024-10-17 19:17:40.147908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:15:31.230 [2024-10-17 19:17:40.286536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.230 [2024-10-17 19:17:40.362030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.230 [2024-10-17 19:17:40.423196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.230  [2024-10-17T19:17:40.747Z] Copying: 512/512 [B] (average 500 kBps) 00:15:31.489 00:15:31.489 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:31.489 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1729192658 )) 00:15:31.489 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:31.489 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1729192659 )) 00:15:31.489 19:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:31.489 [2024-10-17 19:17:40.727444] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
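dd_flag_noatime records the access time of dd.dump0 (epoch 1729192658 in this run), sleeps a second, copies with --iflag=noatime, and asserts the atime has not moved. The same logic as a minimal sketch, with the stat format taken from the trace:

# Sketch of the dd_flag_noatime check (helpers illustrative).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_if=$(stat --printf=%X "$f0")   # access time before the copy
sleep 1                              # a normal read after this would bump the atime

"$SPDK_DD" --if="$f0" --iflag=noatime --of="$f1"

# With noatime the source's access time must be unchanged.
(( $(stat --printf=%X "$f0") == atime_if )) && echo 'noatime flag OK'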
00:15:31.489 [2024-10-17 19:17:40.727589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:15:31.748 [2024-10-17 19:17:40.863761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.748 [2024-10-17 19:17:40.937333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.748 [2024-10-17 19:17:40.996823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.006  [2024-10-17T19:17:41.264Z] Copying: 512/512 [B] (average 500 kBps) 00:15:32.006 00:15:32.006 19:17:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:32.006 19:17:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1729192661 )) 00:15:32.006 00:15:32.006 real 0m2.174s 00:15:32.006 user 0m0.635s 00:15:32.006 sys 0m0.582s 00:15:32.006 19:17:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.006 ************************************ 00:15:32.006 END TEST dd_flag_noatime 00:15:32.006 ************************************ 00:15:32.006 19:17:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:32.263 ************************************ 00:15:32.263 START TEST dd_flags_misc 00:15:32.263 ************************************ 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:32.263 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:32.263 [2024-10-17 19:17:41.354077] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
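The dd_flags_misc run that starts here sweeps I/O flag combinations: direct and nonblock on the input side, and those two plus sync and dsync on the output side (flags_rw extends flags_ro exactly as the arrays in the trace show), copying dd.dump0 to dd.dump1 each time and expecting identical content. In outline, with the content check simplified to cmp as a stand-in:

# Sketch of the dd_flags_misc flag sweep (not the test's exact code).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

flags_ro=(direct nonblock)               # usable on the read side
flags_rw=("${flags_ro[@]}" sync dsync)   # sync/dsync are added for the write side only

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if="$f0" --iflag="$flag_ro" --of="$f1" --oflag="$flag_rw"
    cmp -s "$f0" "$f1" && echo "ok: iflag=$flag_ro oflag=$flag_rw"
  done
done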
00:15:32.263 [2024-10-17 19:17:41.354205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:15:32.263 [2024-10-17 19:17:41.489714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.522 [2024-10-17 19:17:41.559155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.522 [2024-10-17 19:17:41.613302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.522  [2024-10-17T19:17:42.038Z] Copying: 512/512 [B] (average 500 kBps) 00:15:32.780 00:15:32.780 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e227r8i206012sot2adus252k6ti1wavucle9fjzztmflddeb0j9l381gyw814dqf53ooepdqssvrzejd6yeiv476okmiwy0r1oupd0flpigp483semsojz8i43yw0zeagzvak221pzikcmlrk2x06mi7tma8nhuhi7bwgcjpkgnh3uehiufkg4hvshfve1enbilz2yzvryvgk1scgjefi9edr7tk6ed0h7oazh91olxwiw0n4gnfzsox9fwqwwpdf4t6kp87sfwwbz652b2mw34beo21ds3mbtfb45mdfnkoz928j52qd7v336andq7x5hc8d0e74hhjr1gdx96u8y893scsyqhdeumctlnix2kms0sya2h8wtoc5twv9tauh5s86fodfqyvcrx1o21ltjx625kxmtpnm7sbk6fpncvc0zdgyelvg4isk0ed4k17i4biv2l70ovnrumxld9xii34cxx32rm0ps3kdlq6vcpwxqry2737tx4k4t7c6rb == \e\2\2\7\r\8\i\2\0\6\0\1\2\s\o\t\2\a\d\u\s\2\5\2\k\6\t\i\1\w\a\v\u\c\l\e\9\f\j\z\z\t\m\f\l\d\d\e\b\0\j\9\l\3\8\1\g\y\w\8\1\4\d\q\f\5\3\o\o\e\p\d\q\s\s\v\r\z\e\j\d\6\y\e\i\v\4\7\6\o\k\m\i\w\y\0\r\1\o\u\p\d\0\f\l\p\i\g\p\4\8\3\s\e\m\s\o\j\z\8\i\4\3\y\w\0\z\e\a\g\z\v\a\k\2\2\1\p\z\i\k\c\m\l\r\k\2\x\0\6\m\i\7\t\m\a\8\n\h\u\h\i\7\b\w\g\c\j\p\k\g\n\h\3\u\e\h\i\u\f\k\g\4\h\v\s\h\f\v\e\1\e\n\b\i\l\z\2\y\z\v\r\y\v\g\k\1\s\c\g\j\e\f\i\9\e\d\r\7\t\k\6\e\d\0\h\7\o\a\z\h\9\1\o\l\x\w\i\w\0\n\4\g\n\f\z\s\o\x\9\f\w\q\w\w\p\d\f\4\t\6\k\p\8\7\s\f\w\w\b\z\6\5\2\b\2\m\w\3\4\b\e\o\2\1\d\s\3\m\b\t\f\b\4\5\m\d\f\n\k\o\z\9\2\8\j\5\2\q\d\7\v\3\3\6\a\n\d\q\7\x\5\h\c\8\d\0\e\7\4\h\h\j\r\1\g\d\x\9\6\u\8\y\8\9\3\s\c\s\y\q\h\d\e\u\m\c\t\l\n\i\x\2\k\m\s\0\s\y\a\2\h\8\w\t\o\c\5\t\w\v\9\t\a\u\h\5\s\8\6\f\o\d\f\q\y\v\c\r\x\1\o\2\1\l\t\j\x\6\2\5\k\x\m\t\p\n\m\7\s\b\k\6\f\p\n\c\v\c\0\z\d\g\y\e\l\v\g\4\i\s\k\0\e\d\4\k\1\7\i\4\b\i\v\2\l\7\0\o\v\n\r\u\m\x\l\d\9\x\i\i\3\4\c\x\x\3\2\r\m\0\p\s\3\k\d\l\q\6\v\c\p\w\x\q\r\y\2\7\3\7\t\x\4\k\4\t\7\c\6\r\b ]] 00:15:32.780 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:32.780 19:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:32.780 [2024-10-17 19:17:41.892636] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:32.780 [2024-10-17 19:17:41.892760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 00:15:32.780 [2024-10-17 19:17:42.032711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.038 [2024-10-17 19:17:42.099206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.038 [2024-10-17 19:17:42.152766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.038  [2024-10-17T19:17:42.555Z] Copying: 512/512 [B] (average 500 kBps) 00:15:33.297 00:15:33.297 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e227r8i206012sot2adus252k6ti1wavucle9fjzztmflddeb0j9l381gyw814dqf53ooepdqssvrzejd6yeiv476okmiwy0r1oupd0flpigp483semsojz8i43yw0zeagzvak221pzikcmlrk2x06mi7tma8nhuhi7bwgcjpkgnh3uehiufkg4hvshfve1enbilz2yzvryvgk1scgjefi9edr7tk6ed0h7oazh91olxwiw0n4gnfzsox9fwqwwpdf4t6kp87sfwwbz652b2mw34beo21ds3mbtfb45mdfnkoz928j52qd7v336andq7x5hc8d0e74hhjr1gdx96u8y893scsyqhdeumctlnix2kms0sya2h8wtoc5twv9tauh5s86fodfqyvcrx1o21ltjx625kxmtpnm7sbk6fpncvc0zdgyelvg4isk0ed4k17i4biv2l70ovnrumxld9xii34cxx32rm0ps3kdlq6vcpwxqry2737tx4k4t7c6rb == \e\2\2\7\r\8\i\2\0\6\0\1\2\s\o\t\2\a\d\u\s\2\5\2\k\6\t\i\1\w\a\v\u\c\l\e\9\f\j\z\z\t\m\f\l\d\d\e\b\0\j\9\l\3\8\1\g\y\w\8\1\4\d\q\f\5\3\o\o\e\p\d\q\s\s\v\r\z\e\j\d\6\y\e\i\v\4\7\6\o\k\m\i\w\y\0\r\1\o\u\p\d\0\f\l\p\i\g\p\4\8\3\s\e\m\s\o\j\z\8\i\4\3\y\w\0\z\e\a\g\z\v\a\k\2\2\1\p\z\i\k\c\m\l\r\k\2\x\0\6\m\i\7\t\m\a\8\n\h\u\h\i\7\b\w\g\c\j\p\k\g\n\h\3\u\e\h\i\u\f\k\g\4\h\v\s\h\f\v\e\1\e\n\b\i\l\z\2\y\z\v\r\y\v\g\k\1\s\c\g\j\e\f\i\9\e\d\r\7\t\k\6\e\d\0\h\7\o\a\z\h\9\1\o\l\x\w\i\w\0\n\4\g\n\f\z\s\o\x\9\f\w\q\w\w\p\d\f\4\t\6\k\p\8\7\s\f\w\w\b\z\6\5\2\b\2\m\w\3\4\b\e\o\2\1\d\s\3\m\b\t\f\b\4\5\m\d\f\n\k\o\z\9\2\8\j\5\2\q\d\7\v\3\3\6\a\n\d\q\7\x\5\h\c\8\d\0\e\7\4\h\h\j\r\1\g\d\x\9\6\u\8\y\8\9\3\s\c\s\y\q\h\d\e\u\m\c\t\l\n\i\x\2\k\m\s\0\s\y\a\2\h\8\w\t\o\c\5\t\w\v\9\t\a\u\h\5\s\8\6\f\o\d\f\q\y\v\c\r\x\1\o\2\1\l\t\j\x\6\2\5\k\x\m\t\p\n\m\7\s\b\k\6\f\p\n\c\v\c\0\z\d\g\y\e\l\v\g\4\i\s\k\0\e\d\4\k\1\7\i\4\b\i\v\2\l\7\0\o\v\n\r\u\m\x\l\d\9\x\i\i\3\4\c\x\x\3\2\r\m\0\p\s\3\k\d\l\q\6\v\c\p\w\x\q\r\y\2\7\3\7\t\x\4\k\4\t\7\c\6\r\b ]] 00:15:33.297 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:33.297 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:33.297 [2024-10-17 19:17:42.431094] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:33.297 [2024-10-17 19:17:42.431237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60481 ] 00:15:33.555 [2024-10-17 19:17:42.570916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.555 [2024-10-17 19:17:42.644284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.555 [2024-10-17 19:17:42.700445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.555  [2024-10-17T19:17:43.072Z] Copying: 512/512 [B] (average 250 kBps) 00:15:33.814 00:15:33.814 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e227r8i206012sot2adus252k6ti1wavucle9fjzztmflddeb0j9l381gyw814dqf53ooepdqssvrzejd6yeiv476okmiwy0r1oupd0flpigp483semsojz8i43yw0zeagzvak221pzikcmlrk2x06mi7tma8nhuhi7bwgcjpkgnh3uehiufkg4hvshfve1enbilz2yzvryvgk1scgjefi9edr7tk6ed0h7oazh91olxwiw0n4gnfzsox9fwqwwpdf4t6kp87sfwwbz652b2mw34beo21ds3mbtfb45mdfnkoz928j52qd7v336andq7x5hc8d0e74hhjr1gdx96u8y893scsyqhdeumctlnix2kms0sya2h8wtoc5twv9tauh5s86fodfqyvcrx1o21ltjx625kxmtpnm7sbk6fpncvc0zdgyelvg4isk0ed4k17i4biv2l70ovnrumxld9xii34cxx32rm0ps3kdlq6vcpwxqry2737tx4k4t7c6rb == \e\2\2\7\r\8\i\2\0\6\0\1\2\s\o\t\2\a\d\u\s\2\5\2\k\6\t\i\1\w\a\v\u\c\l\e\9\f\j\z\z\t\m\f\l\d\d\e\b\0\j\9\l\3\8\1\g\y\w\8\1\4\d\q\f\5\3\o\o\e\p\d\q\s\s\v\r\z\e\j\d\6\y\e\i\v\4\7\6\o\k\m\i\w\y\0\r\1\o\u\p\d\0\f\l\p\i\g\p\4\8\3\s\e\m\s\o\j\z\8\i\4\3\y\w\0\z\e\a\g\z\v\a\k\2\2\1\p\z\i\k\c\m\l\r\k\2\x\0\6\m\i\7\t\m\a\8\n\h\u\h\i\7\b\w\g\c\j\p\k\g\n\h\3\u\e\h\i\u\f\k\g\4\h\v\s\h\f\v\e\1\e\n\b\i\l\z\2\y\z\v\r\y\v\g\k\1\s\c\g\j\e\f\i\9\e\d\r\7\t\k\6\e\d\0\h\7\o\a\z\h\9\1\o\l\x\w\i\w\0\n\4\g\n\f\z\s\o\x\9\f\w\q\w\w\p\d\f\4\t\6\k\p\8\7\s\f\w\w\b\z\6\5\2\b\2\m\w\3\4\b\e\o\2\1\d\s\3\m\b\t\f\b\4\5\m\d\f\n\k\o\z\9\2\8\j\5\2\q\d\7\v\3\3\6\a\n\d\q\7\x\5\h\c\8\d\0\e\7\4\h\h\j\r\1\g\d\x\9\6\u\8\y\8\9\3\s\c\s\y\q\h\d\e\u\m\c\t\l\n\i\x\2\k\m\s\0\s\y\a\2\h\8\w\t\o\c\5\t\w\v\9\t\a\u\h\5\s\8\6\f\o\d\f\q\y\v\c\r\x\1\o\2\1\l\t\j\x\6\2\5\k\x\m\t\p\n\m\7\s\b\k\6\f\p\n\c\v\c\0\z\d\g\y\e\l\v\g\4\i\s\k\0\e\d\4\k\1\7\i\4\b\i\v\2\l\7\0\o\v\n\r\u\m\x\l\d\9\x\i\i\3\4\c\x\x\3\2\r\m\0\p\s\3\k\d\l\q\6\v\c\p\w\x\q\r\y\2\7\3\7\t\x\4\k\4\t\7\c\6\r\b ]] 00:15:33.814 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:33.814 19:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:33.814 [2024-10-17 19:17:42.995958] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:33.814 [2024-10-17 19:17:42.996080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:15:34.072 [2024-10-17 19:17:43.137797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.072 [2024-10-17 19:17:43.213949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.072 [2024-10-17 19:17:43.272065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.072  [2024-10-17T19:17:43.588Z] Copying: 512/512 [B] (average 250 kBps) 00:15:34.330 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ e227r8i206012sot2adus252k6ti1wavucle9fjzztmflddeb0j9l381gyw814dqf53ooepdqssvrzejd6yeiv476okmiwy0r1oupd0flpigp483semsojz8i43yw0zeagzvak221pzikcmlrk2x06mi7tma8nhuhi7bwgcjpkgnh3uehiufkg4hvshfve1enbilz2yzvryvgk1scgjefi9edr7tk6ed0h7oazh91olxwiw0n4gnfzsox9fwqwwpdf4t6kp87sfwwbz652b2mw34beo21ds3mbtfb45mdfnkoz928j52qd7v336andq7x5hc8d0e74hhjr1gdx96u8y893scsyqhdeumctlnix2kms0sya2h8wtoc5twv9tauh5s86fodfqyvcrx1o21ltjx625kxmtpnm7sbk6fpncvc0zdgyelvg4isk0ed4k17i4biv2l70ovnrumxld9xii34cxx32rm0ps3kdlq6vcpwxqry2737tx4k4t7c6rb == \e\2\2\7\r\8\i\2\0\6\0\1\2\s\o\t\2\a\d\u\s\2\5\2\k\6\t\i\1\w\a\v\u\c\l\e\9\f\j\z\z\t\m\f\l\d\d\e\b\0\j\9\l\3\8\1\g\y\w\8\1\4\d\q\f\5\3\o\o\e\p\d\q\s\s\v\r\z\e\j\d\6\y\e\i\v\4\7\6\o\k\m\i\w\y\0\r\1\o\u\p\d\0\f\l\p\i\g\p\4\8\3\s\e\m\s\o\j\z\8\i\4\3\y\w\0\z\e\a\g\z\v\a\k\2\2\1\p\z\i\k\c\m\l\r\k\2\x\0\6\m\i\7\t\m\a\8\n\h\u\h\i\7\b\w\g\c\j\p\k\g\n\h\3\u\e\h\i\u\f\k\g\4\h\v\s\h\f\v\e\1\e\n\b\i\l\z\2\y\z\v\r\y\v\g\k\1\s\c\g\j\e\f\i\9\e\d\r\7\t\k\6\e\d\0\h\7\o\a\z\h\9\1\o\l\x\w\i\w\0\n\4\g\n\f\z\s\o\x\9\f\w\q\w\w\p\d\f\4\t\6\k\p\8\7\s\f\w\w\b\z\6\5\2\b\2\m\w\3\4\b\e\o\2\1\d\s\3\m\b\t\f\b\4\5\m\d\f\n\k\o\z\9\2\8\j\5\2\q\d\7\v\3\3\6\a\n\d\q\7\x\5\h\c\8\d\0\e\7\4\h\h\j\r\1\g\d\x\9\6\u\8\y\8\9\3\s\c\s\y\q\h\d\e\u\m\c\t\l\n\i\x\2\k\m\s\0\s\y\a\2\h\8\w\t\o\c\5\t\w\v\9\t\a\u\h\5\s\8\6\f\o\d\f\q\y\v\c\r\x\1\o\2\1\l\t\j\x\6\2\5\k\x\m\t\p\n\m\7\s\b\k\6\f\p\n\c\v\c\0\z\d\g\y\e\l\v\g\4\i\s\k\0\e\d\4\k\1\7\i\4\b\i\v\2\l\7\0\o\v\n\r\u\m\x\l\d\9\x\i\i\3\4\c\x\x\3\2\r\m\0\p\s\3\k\d\l\q\6\v\c\p\w\x\q\r\y\2\7\3\7\t\x\4\k\4\t\7\c\6\r\b ]] 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:34.330 19:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:34.330 [2024-10-17 19:17:43.580565] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:34.330 [2024-10-17 19:17:43.580911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60500 ] 00:15:34.589 [2024-10-17 19:17:43.718719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.589 [2024-10-17 19:17:43.786736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.589 [2024-10-17 19:17:43.841984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.847  [2024-10-17T19:17:44.105Z] Copying: 512/512 [B] (average 500 kBps) 00:15:34.847 00:15:34.847 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 26s8aao2ikcsiqims4ovl9pkp0fyuv0j87tk8hbkpgrrcjxq0eculdh36awqew7gz609keovoxz60k689a3now45naozifwicgp12ajesi3pvfo175l5mia6n42mplcbkz3bf6ddnabljoj64prvqbu0emutl27hg3obogvs8zlbo56jmr26p41lr4nn0ecbe7xcmkk87qgwfzddpxfg0mtood5da57jruv3zwlcv69p7vgeu5wi7a8g28p2sw1qsbev1og2ccqqzre9xnxuqvpnllpgy1t1lmg5zifafi8tyx09whhvnvmzn6ubez637y8qxglerwkfno393yhmogyt8bz3bnuten5nlmd7ql25o0dw8ygzlednbjp6guhydypitav7jutw5eccep8yop4drokwb9nnt1mi6xvvnk1is88gptfmmbzod52ukc0n2ilkbxj9jiq1eq0velst8orh8ia0vyt6mlhdkhazwr9b29x6jixvq97tb6ndlr17 == \2\6\s\8\a\a\o\2\i\k\c\s\i\q\i\m\s\4\o\v\l\9\p\k\p\0\f\y\u\v\0\j\8\7\t\k\8\h\b\k\p\g\r\r\c\j\x\q\0\e\c\u\l\d\h\3\6\a\w\q\e\w\7\g\z\6\0\9\k\e\o\v\o\x\z\6\0\k\6\8\9\a\3\n\o\w\4\5\n\a\o\z\i\f\w\i\c\g\p\1\2\a\j\e\s\i\3\p\v\f\o\1\7\5\l\5\m\i\a\6\n\4\2\m\p\l\c\b\k\z\3\b\f\6\d\d\n\a\b\l\j\o\j\6\4\p\r\v\q\b\u\0\e\m\u\t\l\2\7\h\g\3\o\b\o\g\v\s\8\z\l\b\o\5\6\j\m\r\2\6\p\4\1\l\r\4\n\n\0\e\c\b\e\7\x\c\m\k\k\8\7\q\g\w\f\z\d\d\p\x\f\g\0\m\t\o\o\d\5\d\a\5\7\j\r\u\v\3\z\w\l\c\v\6\9\p\7\v\g\e\u\5\w\i\7\a\8\g\2\8\p\2\s\w\1\q\s\b\e\v\1\o\g\2\c\c\q\q\z\r\e\9\x\n\x\u\q\v\p\n\l\l\p\g\y\1\t\1\l\m\g\5\z\i\f\a\f\i\8\t\y\x\0\9\w\h\h\v\n\v\m\z\n\6\u\b\e\z\6\3\7\y\8\q\x\g\l\e\r\w\k\f\n\o\3\9\3\y\h\m\o\g\y\t\8\b\z\3\b\n\u\t\e\n\5\n\l\m\d\7\q\l\2\5\o\0\d\w\8\y\g\z\l\e\d\n\b\j\p\6\g\u\h\y\d\y\p\i\t\a\v\7\j\u\t\w\5\e\c\c\e\p\8\y\o\p\4\d\r\o\k\w\b\9\n\n\t\1\m\i\6\x\v\v\n\k\1\i\s\8\8\g\p\t\f\m\m\b\z\o\d\5\2\u\k\c\0\n\2\i\l\k\b\x\j\9\j\i\q\1\e\q\0\v\e\l\s\t\8\o\r\h\8\i\a\0\v\y\t\6\m\l\h\d\k\h\a\z\w\r\9\b\2\9\x\6\j\i\x\v\q\9\7\t\b\6\n\d\l\r\1\7 ]] 00:15:34.847 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:34.847 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:35.106 [2024-10-17 19:17:44.131749] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:35.106 [2024-10-17 19:17:44.131883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:15:35.106 [2024-10-17 19:17:44.272201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.106 [2024-10-17 19:17:44.347683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.364 [2024-10-17 19:17:44.406892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.364  [2024-10-17T19:17:44.881Z] Copying: 512/512 [B] (average 500 kBps) 00:15:35.623 00:15:35.623 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 26s8aao2ikcsiqims4ovl9pkp0fyuv0j87tk8hbkpgrrcjxq0eculdh36awqew7gz609keovoxz60k689a3now45naozifwicgp12ajesi3pvfo175l5mia6n42mplcbkz3bf6ddnabljoj64prvqbu0emutl27hg3obogvs8zlbo56jmr26p41lr4nn0ecbe7xcmkk87qgwfzddpxfg0mtood5da57jruv3zwlcv69p7vgeu5wi7a8g28p2sw1qsbev1og2ccqqzre9xnxuqvpnllpgy1t1lmg5zifafi8tyx09whhvnvmzn6ubez637y8qxglerwkfno393yhmogyt8bz3bnuten5nlmd7ql25o0dw8ygzlednbjp6guhydypitav7jutw5eccep8yop4drokwb9nnt1mi6xvvnk1is88gptfmmbzod52ukc0n2ilkbxj9jiq1eq0velst8orh8ia0vyt6mlhdkhazwr9b29x6jixvq97tb6ndlr17 == \2\6\s\8\a\a\o\2\i\k\c\s\i\q\i\m\s\4\o\v\l\9\p\k\p\0\f\y\u\v\0\j\8\7\t\k\8\h\b\k\p\g\r\r\c\j\x\q\0\e\c\u\l\d\h\3\6\a\w\q\e\w\7\g\z\6\0\9\k\e\o\v\o\x\z\6\0\k\6\8\9\a\3\n\o\w\4\5\n\a\o\z\i\f\w\i\c\g\p\1\2\a\j\e\s\i\3\p\v\f\o\1\7\5\l\5\m\i\a\6\n\4\2\m\p\l\c\b\k\z\3\b\f\6\d\d\n\a\b\l\j\o\j\6\4\p\r\v\q\b\u\0\e\m\u\t\l\2\7\h\g\3\o\b\o\g\v\s\8\z\l\b\o\5\6\j\m\r\2\6\p\4\1\l\r\4\n\n\0\e\c\b\e\7\x\c\m\k\k\8\7\q\g\w\f\z\d\d\p\x\f\g\0\m\t\o\o\d\5\d\a\5\7\j\r\u\v\3\z\w\l\c\v\6\9\p\7\v\g\e\u\5\w\i\7\a\8\g\2\8\p\2\s\w\1\q\s\b\e\v\1\o\g\2\c\c\q\q\z\r\e\9\x\n\x\u\q\v\p\n\l\l\p\g\y\1\t\1\l\m\g\5\z\i\f\a\f\i\8\t\y\x\0\9\w\h\h\v\n\v\m\z\n\6\u\b\e\z\6\3\7\y\8\q\x\g\l\e\r\w\k\f\n\o\3\9\3\y\h\m\o\g\y\t\8\b\z\3\b\n\u\t\e\n\5\n\l\m\d\7\q\l\2\5\o\0\d\w\8\y\g\z\l\e\d\n\b\j\p\6\g\u\h\y\d\y\p\i\t\a\v\7\j\u\t\w\5\e\c\c\e\p\8\y\o\p\4\d\r\o\k\w\b\9\n\n\t\1\m\i\6\x\v\v\n\k\1\i\s\8\8\g\p\t\f\m\m\b\z\o\d\5\2\u\k\c\0\n\2\i\l\k\b\x\j\9\j\i\q\1\e\q\0\v\e\l\s\t\8\o\r\h\8\i\a\0\v\y\t\6\m\l\h\d\k\h\a\z\w\r\9\b\2\9\x\6\j\i\x\v\q\9\7\t\b\6\n\d\l\r\1\7 ]] 00:15:35.623 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:35.623 19:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:35.623 [2024-10-17 19:17:44.702212] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:35.623 [2024-10-17 19:17:44.702375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ] 00:15:35.623 [2024-10-17 19:17:44.841677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.882 [2024-10-17 19:17:44.917958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.882 [2024-10-17 19:17:44.975410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.882  [2024-10-17T19:17:45.398Z] Copying: 512/512 [B] (average 166 kBps) 00:15:36.140 00:15:36.140 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 26s8aao2ikcsiqims4ovl9pkp0fyuv0j87tk8hbkpgrrcjxq0eculdh36awqew7gz609keovoxz60k689a3now45naozifwicgp12ajesi3pvfo175l5mia6n42mplcbkz3bf6ddnabljoj64prvqbu0emutl27hg3obogvs8zlbo56jmr26p41lr4nn0ecbe7xcmkk87qgwfzddpxfg0mtood5da57jruv3zwlcv69p7vgeu5wi7a8g28p2sw1qsbev1og2ccqqzre9xnxuqvpnllpgy1t1lmg5zifafi8tyx09whhvnvmzn6ubez637y8qxglerwkfno393yhmogyt8bz3bnuten5nlmd7ql25o0dw8ygzlednbjp6guhydypitav7jutw5eccep8yop4drokwb9nnt1mi6xvvnk1is88gptfmmbzod52ukc0n2ilkbxj9jiq1eq0velst8orh8ia0vyt6mlhdkhazwr9b29x6jixvq97tb6ndlr17 == \2\6\s\8\a\a\o\2\i\k\c\s\i\q\i\m\s\4\o\v\l\9\p\k\p\0\f\y\u\v\0\j\8\7\t\k\8\h\b\k\p\g\r\r\c\j\x\q\0\e\c\u\l\d\h\3\6\a\w\q\e\w\7\g\z\6\0\9\k\e\o\v\o\x\z\6\0\k\6\8\9\a\3\n\o\w\4\5\n\a\o\z\i\f\w\i\c\g\p\1\2\a\j\e\s\i\3\p\v\f\o\1\7\5\l\5\m\i\a\6\n\4\2\m\p\l\c\b\k\z\3\b\f\6\d\d\n\a\b\l\j\o\j\6\4\p\r\v\q\b\u\0\e\m\u\t\l\2\7\h\g\3\o\b\o\g\v\s\8\z\l\b\o\5\6\j\m\r\2\6\p\4\1\l\r\4\n\n\0\e\c\b\e\7\x\c\m\k\k\8\7\q\g\w\f\z\d\d\p\x\f\g\0\m\t\o\o\d\5\d\a\5\7\j\r\u\v\3\z\w\l\c\v\6\9\p\7\v\g\e\u\5\w\i\7\a\8\g\2\8\p\2\s\w\1\q\s\b\e\v\1\o\g\2\c\c\q\q\z\r\e\9\x\n\x\u\q\v\p\n\l\l\p\g\y\1\t\1\l\m\g\5\z\i\f\a\f\i\8\t\y\x\0\9\w\h\h\v\n\v\m\z\n\6\u\b\e\z\6\3\7\y\8\q\x\g\l\e\r\w\k\f\n\o\3\9\3\y\h\m\o\g\y\t\8\b\z\3\b\n\u\t\e\n\5\n\l\m\d\7\q\l\2\5\o\0\d\w\8\y\g\z\l\e\d\n\b\j\p\6\g\u\h\y\d\y\p\i\t\a\v\7\j\u\t\w\5\e\c\c\e\p\8\y\o\p\4\d\r\o\k\w\b\9\n\n\t\1\m\i\6\x\v\v\n\k\1\i\s\8\8\g\p\t\f\m\m\b\z\o\d\5\2\u\k\c\0\n\2\i\l\k\b\x\j\9\j\i\q\1\e\q\0\v\e\l\s\t\8\o\r\h\8\i\a\0\v\y\t\6\m\l\h\d\k\h\a\z\w\r\9\b\2\9\x\6\j\i\x\v\q\9\7\t\b\6\n\d\l\r\1\7 ]] 00:15:36.140 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:36.140 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:36.140 [2024-10-17 19:17:45.293982] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:36.140 [2024-10-17 19:17:45.294090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:15:36.399 [2024-10-17 19:17:45.428577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.399 [2024-10-17 19:17:45.505756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.399 [2024-10-17 19:17:45.563123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.399  [2024-10-17T19:17:45.916Z] Copying: 512/512 [B] (average 500 kBps) 00:15:36.658 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 26s8aao2ikcsiqims4ovl9pkp0fyuv0j87tk8hbkpgrrcjxq0eculdh36awqew7gz609keovoxz60k689a3now45naozifwicgp12ajesi3pvfo175l5mia6n42mplcbkz3bf6ddnabljoj64prvqbu0emutl27hg3obogvs8zlbo56jmr26p41lr4nn0ecbe7xcmkk87qgwfzddpxfg0mtood5da57jruv3zwlcv69p7vgeu5wi7a8g28p2sw1qsbev1og2ccqqzre9xnxuqvpnllpgy1t1lmg5zifafi8tyx09whhvnvmzn6ubez637y8qxglerwkfno393yhmogyt8bz3bnuten5nlmd7ql25o0dw8ygzlednbjp6guhydypitav7jutw5eccep8yop4drokwb9nnt1mi6xvvnk1is88gptfmmbzod52ukc0n2ilkbxj9jiq1eq0velst8orh8ia0vyt6mlhdkhazwr9b29x6jixvq97tb6ndlr17 == \2\6\s\8\a\a\o\2\i\k\c\s\i\q\i\m\s\4\o\v\l\9\p\k\p\0\f\y\u\v\0\j\8\7\t\k\8\h\b\k\p\g\r\r\c\j\x\q\0\e\c\u\l\d\h\3\6\a\w\q\e\w\7\g\z\6\0\9\k\e\o\v\o\x\z\6\0\k\6\8\9\a\3\n\o\w\4\5\n\a\o\z\i\f\w\i\c\g\p\1\2\a\j\e\s\i\3\p\v\f\o\1\7\5\l\5\m\i\a\6\n\4\2\m\p\l\c\b\k\z\3\b\f\6\d\d\n\a\b\l\j\o\j\6\4\p\r\v\q\b\u\0\e\m\u\t\l\2\7\h\g\3\o\b\o\g\v\s\8\z\l\b\o\5\6\j\m\r\2\6\p\4\1\l\r\4\n\n\0\e\c\b\e\7\x\c\m\k\k\8\7\q\g\w\f\z\d\d\p\x\f\g\0\m\t\o\o\d\5\d\a\5\7\j\r\u\v\3\z\w\l\c\v\6\9\p\7\v\g\e\u\5\w\i\7\a\8\g\2\8\p\2\s\w\1\q\s\b\e\v\1\o\g\2\c\c\q\q\z\r\e\9\x\n\x\u\q\v\p\n\l\l\p\g\y\1\t\1\l\m\g\5\z\i\f\a\f\i\8\t\y\x\0\9\w\h\h\v\n\v\m\z\n\6\u\b\e\z\6\3\7\y\8\q\x\g\l\e\r\w\k\f\n\o\3\9\3\y\h\m\o\g\y\t\8\b\z\3\b\n\u\t\e\n\5\n\l\m\d\7\q\l\2\5\o\0\d\w\8\y\g\z\l\e\d\n\b\j\p\6\g\u\h\y\d\y\p\i\t\a\v\7\j\u\t\w\5\e\c\c\e\p\8\y\o\p\4\d\r\o\k\w\b\9\n\n\t\1\m\i\6\x\v\v\n\k\1\i\s\8\8\g\p\t\f\m\m\b\z\o\d\5\2\u\k\c\0\n\2\i\l\k\b\x\j\9\j\i\q\1\e\q\0\v\e\l\s\t\8\o\r\h\8\i\a\0\v\y\t\6\m\l\h\d\k\h\a\z\w\r\9\b\2\9\x\6\j\i\x\v\q\9\7\t\b\6\n\d\l\r\1\7 ]] 00:15:36.658 00:15:36.658 real 0m4.505s 00:15:36.658 user 0m2.451s 00:15:36.658 sys 0m2.234s 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:15:36.658 ************************************ 00:15:36.658 END TEST dd_flags_misc 00:15:36.658 ************************************ 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:15:36.658 * Second test run, disabling liburing, forcing AIO 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.658 ************************************ 00:15:36.658 START TEST dd_flag_append_forced_aio 00:15:36.658 ************************************ 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=wekspmnghoosn47oi0ld5ut69h8bjxk0 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=jmxgf94ijd3ycmkhdg6dd5boohjxjma2 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s wekspmnghoosn47oi0ld5ut69h8bjxk0 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s jmxgf94ijd3ycmkhdg6dd5boohjxjma2 00:15:36.658 19:17:45 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:15:36.917 [2024-10-17 19:17:45.919610] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:36.917 [2024-10-17 19:17:45.919720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:15:36.917 [2024-10-17 19:17:46.059825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.917 [2024-10-17 19:17:46.132171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.176 [2024-10-17 19:17:46.188666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.176  [2024-10-17T19:17:46.693Z] Copying: 32/32 [B] (average 31 kBps) 00:15:37.435 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ jmxgf94ijd3ycmkhdg6dd5boohjxjma2wekspmnghoosn47oi0ld5ut69h8bjxk0 == \j\m\x\g\f\9\4\i\j\d\3\y\c\m\k\h\d\g\6\d\d\5\b\o\o\h\j\x\j\m\a\2\w\e\k\s\p\m\n\g\h\o\o\s\n\4\7\o\i\0\l\d\5\u\t\6\9\h\8\b\j\x\k\0 ]] 00:15:37.435 00:15:37.435 real 0m0.586s 00:15:37.435 user 0m0.304s 00:15:37.435 sys 0m0.159s 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:37.435 ************************************ 00:15:37.435 END TEST dd_flag_append_forced_aio 00:15:37.435 ************************************ 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:37.435 ************************************ 00:15:37.435 START TEST dd_flag_directory_forced_aio 00:15:37.435 ************************************ 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.435 19:17:46 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:37.435 19:17:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:37.435 [2024-10-17 19:17:46.562784] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:37.435 [2024-10-17 19:17:46.562898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:15:37.694 [2024-10-17 19:17:46.703972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.694 [2024-10-17 19:17:46.779353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.694 [2024-10-17 19:17:46.836530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.694 [2024-10-17 19:17:46.876401] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:37.694 [2024-10-17 19:17:46.876470] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:37.694 [2024-10-17 19:17:46.876495] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:37.952 [2024-10-17 19:17:46.996995] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.952 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.953 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.953 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.953 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:37.953 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:15:37.953 [2024-10-17 19:17:47.136095] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:37.953 [2024-10-17 19:17:47.136249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:15:38.211 [2024-10-17 19:17:47.274187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.211 [2024-10-17 19:17:47.341681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.211 [2024-10-17 19:17:47.397028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.211 [2024-10-17 19:17:47.433201] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:38.211 [2024-10-17 19:17:47.433270] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:15:38.211 [2024-10-17 19:17:47.433291] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:38.471 [2024-10-17 19:17:47.549316] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:15:38.471 19:17:47 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:38.471 00:15:38.471 real 0m1.123s 00:15:38.471 user 0m0.612s 00:15:38.471 sys 0m0.294s 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:38.471 ************************************ 00:15:38.471 END TEST dd_flag_directory_forced_aio 00:15:38.471 ************************************ 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:38.471 ************************************ 00:15:38.471 START TEST dd_flag_nofollow_forced_aio 00:15:38.471 ************************************ 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:38.471 19:17:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:38.730 [2024-10-17 19:17:47.749023] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:38.730 [2024-10-17 19:17:47.749155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:15:38.730 [2024-10-17 19:17:47.891562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.730 [2024-10-17 19:17:47.970782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.989 [2024-10-17 19:17:48.031485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.989 [2024-10-17 19:17:48.071416] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:38.989 [2024-10-17 19:17:48.071491] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:15:38.989 [2024-10-17 19:17:48.071515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:38.989 [2024-10-17 19:17:48.192148] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:15:39.250 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:15:39.250 [2024-10-17 19:17:48.335037] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:39.250 [2024-10-17 19:17:48.335171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60636 ] 00:15:39.250 [2024-10-17 19:17:48.475380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.550 [2024-10-17 19:17:48.543163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.550 [2024-10-17 19:17:48.598589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.550 [2024-10-17 19:17:48.636714] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:39.550 [2024-10-17 19:17:48.636797] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:15:39.550 [2024-10-17 19:17:48.636832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:39.550 [2024-10-17 19:17:48.754184] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:39.812 19:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:39.812 [2024-10-17 19:17:48.897716] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:39.812 [2024-10-17 19:17:48.897856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60644 ] 00:15:39.812 [2024-10-17 19:17:49.037472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.071 [2024-10-17 19:17:49.108533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.071 [2024-10-17 19:17:49.164939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.071  [2024-10-17T19:17:49.590Z] Copying: 512/512 [B] (average 500 kBps) 00:15:40.332 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ u2rhftmu2af4azenfz6tq3koi5kzic4jsauxjs3gbcq7hi7kw9ek0eughbs1uzs94w8poj3lqsykbr7vt62xw74j424di7wzcdzlg0gj7yzzsy3dt03wgo0365cyihn0o7xd3ggahozvh5zmdksu824w6dj60oaafnhfk89ezgqkr12fw9bdjhnxea7sy87i33z4ygy6ya3x96v6yxdl8k42u49c61l9fmx33rp002kyd9gy6brmps9cjo3ge2i89u9fdpzjtcgrmq5sm64j6cd3gid9ly9xj5m06o1mb0ei7j2p5v24g4gatfeckeaakeqkcwneb4wv6osnelvxbl4osfspt5ztxflzk5brrhh38phtbzeurvw7bqnaz6vlup6k31dg6h6913p8vr0t3ap588i6ockgvfnrhx990lcu53qir8k01zqix1x4boibbt44l9spj1hp9f6m43r716h6sttxwqzjqxt9auoknwxkjnxtfbu85wpacmvk8tjv == \u\2\r\h\f\t\m\u\2\a\f\4\a\z\e\n\f\z\6\t\q\3\k\o\i\5\k\z\i\c\4\j\s\a\u\x\j\s\3\g\b\c\q\7\h\i\7\k\w\9\e\k\0\e\u\g\h\b\s\1\u\z\s\9\4\w\8\p\o\j\3\l\q\s\y\k\b\r\7\v\t\6\2\x\w\7\4\j\4\2\4\d\i\7\w\z\c\d\z\l\g\0\g\j\7\y\z\z\s\y\3\d\t\0\3\w\g\o\0\3\6\5\c\y\i\h\n\0\o\7\x\d\3\g\g\a\h\o\z\v\h\5\z\m\d\k\s\u\8\2\4\w\6\d\j\6\0\o\a\a\f\n\h\f\k\8\9\e\z\g\q\k\r\1\2\f\w\9\b\d\j\h\n\x\e\a\7\s\y\8\7\i\3\3\z\4\y\g\y\6\y\a\3\x\9\6\v\6\y\x\d\l\8\k\4\2\u\4\9\c\6\1\l\9\f\m\x\3\3\r\p\0\0\2\k\y\d\9\g\y\6\b\r\m\p\s\9\c\j\o\3\g\e\2\i\8\9\u\9\f\d\p\z\j\t\c\g\r\m\q\5\s\m\6\4\j\6\c\d\3\g\i\d\9\l\y\9\x\j\5\m\0\6\o\1\m\b\0\e\i\7\j\2\p\5\v\2\4\g\4\g\a\t\f\e\c\k\e\a\a\k\e\q\k\c\w\n\e\b\4\w\v\6\o\s\n\e\l\v\x\b\l\4\o\s\f\s\p\t\5\z\t\x\f\l\z\k\5\b\r\r\h\h\3\8\p\h\t\b\z\e\u\r\v\w\7\b\q\n\a\z\6\v\l\u\p\6\k\3\1\d\g\6\h\6\9\1\3\p\8\v\r\0\t\3\a\p\5\8\8\i\6\o\c\k\g\v\f\n\r\h\x\9\9\0\l\c\u\5\3\q\i\r\8\k\0\1\z\q\i\x\1\x\4\b\o\i\b\b\t\4\4\l\9\s\p\j\1\h\p\9\f\6\m\4\3\r\7\1\6\h\6\s\t\t\x\w\q\z\j\q\x\t\9\a\u\o\k\n\w\x\k\j\n\x\t\f\b\u\8\5\w\p\a\c\m\v\k\8\t\j\v ]] 00:15:40.332 00:15:40.332 real 0m1.755s 00:15:40.332 user 0m0.930s 00:15:40.332 sys 0m0.469s 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:40.332 ************************************ 00:15:40.332 END TEST dd_flag_nofollow_forced_aio 00:15:40.332 ************************************ 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:40.332 ************************************ 00:15:40.332 START TEST dd_flag_noatime_forced_aio 00:15:40.332 ************************************ 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1729192669 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1729192669 00:15:40.332 19:17:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:15:41.267 19:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:41.525 [2024-10-17 19:17:50.551990] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:41.525 [2024-10-17 19:17:50.552094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:15:41.525 [2024-10-17 19:17:50.683498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.525 [2024-10-17 19:17:50.756064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.782 [2024-10-17 19:17:50.810127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:41.782  [2024-10-17T19:17:51.298Z] Copying: 512/512 [B] (average 500 kBps) 00:15:42.040 00:15:42.040 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:42.040 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1729192669 )) 00:15:42.040 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:42.040 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1729192669 )) 00:15:42.040 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:42.041 [2024-10-17 19:17:51.113794] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:42.041 [2024-10-17 19:17:51.113909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60700 ] 00:15:42.041 [2024-10-17 19:17:51.250835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.299 [2024-10-17 19:17:51.318880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.299 [2024-10-17 19:17:51.372210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.299  [2024-10-17T19:17:51.816Z] Copying: 512/512 [B] (average 500 kBps) 00:15:42.558 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1729192671 )) 00:15:42.558 00:15:42.558 real 0m2.132s 00:15:42.558 user 0m0.602s 00:15:42.558 sys 0m0.292s 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:42.558 ************************************ 00:15:42.558 END TEST dd_flag_noatime_forced_aio 00:15:42.558 ************************************ 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.558 ************************************ 00:15:42.558 START TEST dd_flags_misc_forced_aio 00:15:42.558 ************************************ 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:42.558 19:17:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:42.558 [2024-10-17 19:17:51.723536] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:42.558 [2024-10-17 19:17:51.723645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:15:42.816 [2024-10-17 19:17:51.856972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.816 [2024-10-17 19:17:51.923579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.816 [2024-10-17 19:17:51.976264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.816  [2024-10-17T19:17:52.332Z] Copying: 512/512 [B] (average 500 kBps) 00:15:43.074 00:15:43.074 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mqnjbeja58yznus30iikkejaelwsrqjiuxk6b9x0bjsdm3lj12rmaphvaop1n48v6oy7wn3m8zlkwh33ufpn361jcqg0siyg1u600gz8rgc02b715uyol54fieaclkiovdcpq06kp5js3xpgwljuhyok70e6rtc3szchi03ntrsoumz7onkgd6samhfqmihcocm1r5ejbtk0cyjozj5adeb1kzu2hsna5w2mdhytndhdjj15dquxwynyd9o6spp1sja8wx91z23qbwxldf2j3qsy9zrhmf1q92hh8f6j714hpxxcz6gbwjwy2fn34oiuc3ji85y8katu64cjgdvti7c4e62ua3946pf6l73s2jm1rht26ghc52qwa0zugz98qufr6fvjdafsg2q5ab1l0dkcbygdah8hub7wk14va1w3gxq0uqjxi01uvzyl4b5jtsi1924j11figa4cyd92jf8e4zrb7fy878r3vwgc0onxt6u4xae6f4t4hm0cgtdo == 
\m\q\n\j\b\e\j\a\5\8\y\z\n\u\s\3\0\i\i\k\k\e\j\a\e\l\w\s\r\q\j\i\u\x\k\6\b\9\x\0\b\j\s\d\m\3\l\j\1\2\r\m\a\p\h\v\a\o\p\1\n\4\8\v\6\o\y\7\w\n\3\m\8\z\l\k\w\h\3\3\u\f\p\n\3\6\1\j\c\q\g\0\s\i\y\g\1\u\6\0\0\g\z\8\r\g\c\0\2\b\7\1\5\u\y\o\l\5\4\f\i\e\a\c\l\k\i\o\v\d\c\p\q\0\6\k\p\5\j\s\3\x\p\g\w\l\j\u\h\y\o\k\7\0\e\6\r\t\c\3\s\z\c\h\i\0\3\n\t\r\s\o\u\m\z\7\o\n\k\g\d\6\s\a\m\h\f\q\m\i\h\c\o\c\m\1\r\5\e\j\b\t\k\0\c\y\j\o\z\j\5\a\d\e\b\1\k\z\u\2\h\s\n\a\5\w\2\m\d\h\y\t\n\d\h\d\j\j\1\5\d\q\u\x\w\y\n\y\d\9\o\6\s\p\p\1\s\j\a\8\w\x\9\1\z\2\3\q\b\w\x\l\d\f\2\j\3\q\s\y\9\z\r\h\m\f\1\q\9\2\h\h\8\f\6\j\7\1\4\h\p\x\x\c\z\6\g\b\w\j\w\y\2\f\n\3\4\o\i\u\c\3\j\i\8\5\y\8\k\a\t\u\6\4\c\j\g\d\v\t\i\7\c\4\e\6\2\u\a\3\9\4\6\p\f\6\l\7\3\s\2\j\m\1\r\h\t\2\6\g\h\c\5\2\q\w\a\0\z\u\g\z\9\8\q\u\f\r\6\f\v\j\d\a\f\s\g\2\q\5\a\b\1\l\0\d\k\c\b\y\g\d\a\h\8\h\u\b\7\w\k\1\4\v\a\1\w\3\g\x\q\0\u\q\j\x\i\0\1\u\v\z\y\l\4\b\5\j\t\s\i\1\9\2\4\j\1\1\f\i\g\a\4\c\y\d\9\2\j\f\8\e\4\z\r\b\7\f\y\8\7\8\r\3\v\w\g\c\0\o\n\x\t\6\u\4\x\a\e\6\f\4\t\4\h\m\0\c\g\t\d\o ]] 00:15:43.074 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:43.074 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:43.074 [2024-10-17 19:17:52.287392] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:43.074 [2024-10-17 19:17:52.287526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:15:43.336 [2024-10-17 19:17:52.422470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.336 [2024-10-17 19:17:52.489110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.336 [2024-10-17 19:17:52.542866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.336  [2024-10-17T19:17:52.853Z] Copying: 512/512 [B] (average 500 kBps) 00:15:43.595 00:15:43.595 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mqnjbeja58yznus30iikkejaelwsrqjiuxk6b9x0bjsdm3lj12rmaphvaop1n48v6oy7wn3m8zlkwh33ufpn361jcqg0siyg1u600gz8rgc02b715uyol54fieaclkiovdcpq06kp5js3xpgwljuhyok70e6rtc3szchi03ntrsoumz7onkgd6samhfqmihcocm1r5ejbtk0cyjozj5adeb1kzu2hsna5w2mdhytndhdjj15dquxwynyd9o6spp1sja8wx91z23qbwxldf2j3qsy9zrhmf1q92hh8f6j714hpxxcz6gbwjwy2fn34oiuc3ji85y8katu64cjgdvti7c4e62ua3946pf6l73s2jm1rht26ghc52qwa0zugz98qufr6fvjdafsg2q5ab1l0dkcbygdah8hub7wk14va1w3gxq0uqjxi01uvzyl4b5jtsi1924j11figa4cyd92jf8e4zrb7fy878r3vwgc0onxt6u4xae6f4t4hm0cgtdo == 
\m\q\n\j\b\e\j\a\5\8\y\z\n\u\s\3\0\i\i\k\k\e\j\a\e\l\w\s\r\q\j\i\u\x\k\6\b\9\x\0\b\j\s\d\m\3\l\j\1\2\r\m\a\p\h\v\a\o\p\1\n\4\8\v\6\o\y\7\w\n\3\m\8\z\l\k\w\h\3\3\u\f\p\n\3\6\1\j\c\q\g\0\s\i\y\g\1\u\6\0\0\g\z\8\r\g\c\0\2\b\7\1\5\u\y\o\l\5\4\f\i\e\a\c\l\k\i\o\v\d\c\p\q\0\6\k\p\5\j\s\3\x\p\g\w\l\j\u\h\y\o\k\7\0\e\6\r\t\c\3\s\z\c\h\i\0\3\n\t\r\s\o\u\m\z\7\o\n\k\g\d\6\s\a\m\h\f\q\m\i\h\c\o\c\m\1\r\5\e\j\b\t\k\0\c\y\j\o\z\j\5\a\d\e\b\1\k\z\u\2\h\s\n\a\5\w\2\m\d\h\y\t\n\d\h\d\j\j\1\5\d\q\u\x\w\y\n\y\d\9\o\6\s\p\p\1\s\j\a\8\w\x\9\1\z\2\3\q\b\w\x\l\d\f\2\j\3\q\s\y\9\z\r\h\m\f\1\q\9\2\h\h\8\f\6\j\7\1\4\h\p\x\x\c\z\6\g\b\w\j\w\y\2\f\n\3\4\o\i\u\c\3\j\i\8\5\y\8\k\a\t\u\6\4\c\j\g\d\v\t\i\7\c\4\e\6\2\u\a\3\9\4\6\p\f\6\l\7\3\s\2\j\m\1\r\h\t\2\6\g\h\c\5\2\q\w\a\0\z\u\g\z\9\8\q\u\f\r\6\f\v\j\d\a\f\s\g\2\q\5\a\b\1\l\0\d\k\c\b\y\g\d\a\h\8\h\u\b\7\w\k\1\4\v\a\1\w\3\g\x\q\0\u\q\j\x\i\0\1\u\v\z\y\l\4\b\5\j\t\s\i\1\9\2\4\j\1\1\f\i\g\a\4\c\y\d\9\2\j\f\8\e\4\z\r\b\7\f\y\8\7\8\r\3\v\w\g\c\0\o\n\x\t\6\u\4\x\a\e\6\f\4\t\4\h\m\0\c\g\t\d\o ]] 00:15:43.595 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:43.595 19:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:43.903 [2024-10-17 19:17:52.874872] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:43.903 [2024-10-17 19:17:52.875045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:15:43.903 [2024-10-17 19:17:53.022973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.903 [2024-10-17 19:17:53.092349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.165 [2024-10-17 19:17:53.146625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.165  [2024-10-17T19:17:53.423Z] Copying: 512/512 [B] (average 250 kBps) 00:15:44.165 00:15:44.165 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mqnjbeja58yznus30iikkejaelwsrqjiuxk6b9x0bjsdm3lj12rmaphvaop1n48v6oy7wn3m8zlkwh33ufpn361jcqg0siyg1u600gz8rgc02b715uyol54fieaclkiovdcpq06kp5js3xpgwljuhyok70e6rtc3szchi03ntrsoumz7onkgd6samhfqmihcocm1r5ejbtk0cyjozj5adeb1kzu2hsna5w2mdhytndhdjj15dquxwynyd9o6spp1sja8wx91z23qbwxldf2j3qsy9zrhmf1q92hh8f6j714hpxxcz6gbwjwy2fn34oiuc3ji85y8katu64cjgdvti7c4e62ua3946pf6l73s2jm1rht26ghc52qwa0zugz98qufr6fvjdafsg2q5ab1l0dkcbygdah8hub7wk14va1w3gxq0uqjxi01uvzyl4b5jtsi1924j11figa4cyd92jf8e4zrb7fy878r3vwgc0onxt6u4xae6f4t4hm0cgtdo == 
\m\q\n\j\b\e\j\a\5\8\y\z\n\u\s\3\0\i\i\k\k\e\j\a\e\l\w\s\r\q\j\i\u\x\k\6\b\9\x\0\b\j\s\d\m\3\l\j\1\2\r\m\a\p\h\v\a\o\p\1\n\4\8\v\6\o\y\7\w\n\3\m\8\z\l\k\w\h\3\3\u\f\p\n\3\6\1\j\c\q\g\0\s\i\y\g\1\u\6\0\0\g\z\8\r\g\c\0\2\b\7\1\5\u\y\o\l\5\4\f\i\e\a\c\l\k\i\o\v\d\c\p\q\0\6\k\p\5\j\s\3\x\p\g\w\l\j\u\h\y\o\k\7\0\e\6\r\t\c\3\s\z\c\h\i\0\3\n\t\r\s\o\u\m\z\7\o\n\k\g\d\6\s\a\m\h\f\q\m\i\h\c\o\c\m\1\r\5\e\j\b\t\k\0\c\y\j\o\z\j\5\a\d\e\b\1\k\z\u\2\h\s\n\a\5\w\2\m\d\h\y\t\n\d\h\d\j\j\1\5\d\q\u\x\w\y\n\y\d\9\o\6\s\p\p\1\s\j\a\8\w\x\9\1\z\2\3\q\b\w\x\l\d\f\2\j\3\q\s\y\9\z\r\h\m\f\1\q\9\2\h\h\8\f\6\j\7\1\4\h\p\x\x\c\z\6\g\b\w\j\w\y\2\f\n\3\4\o\i\u\c\3\j\i\8\5\y\8\k\a\t\u\6\4\c\j\g\d\v\t\i\7\c\4\e\6\2\u\a\3\9\4\6\p\f\6\l\7\3\s\2\j\m\1\r\h\t\2\6\g\h\c\5\2\q\w\a\0\z\u\g\z\9\8\q\u\f\r\6\f\v\j\d\a\f\s\g\2\q\5\a\b\1\l\0\d\k\c\b\y\g\d\a\h\8\h\u\b\7\w\k\1\4\v\a\1\w\3\g\x\q\0\u\q\j\x\i\0\1\u\v\z\y\l\4\b\5\j\t\s\i\1\9\2\4\j\1\1\f\i\g\a\4\c\y\d\9\2\j\f\8\e\4\z\r\b\7\f\y\8\7\8\r\3\v\w\g\c\0\o\n\x\t\6\u\4\x\a\e\6\f\4\t\4\h\m\0\c\g\t\d\o ]] 00:15:44.165 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:44.165 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:44.426 [2024-10-17 19:17:53.448425] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:44.426 [2024-10-17 19:17:53.448528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:15:44.426 [2024-10-17 19:17:53.578967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.426 [2024-10-17 19:17:53.644289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.686 [2024-10-17 19:17:53.697939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.686  [2024-10-17T19:17:54.203Z] Copying: 512/512 [B] (average 250 kBps) 00:15:44.945 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mqnjbeja58yznus30iikkejaelwsrqjiuxk6b9x0bjsdm3lj12rmaphvaop1n48v6oy7wn3m8zlkwh33ufpn361jcqg0siyg1u600gz8rgc02b715uyol54fieaclkiovdcpq06kp5js3xpgwljuhyok70e6rtc3szchi03ntrsoumz7onkgd6samhfqmihcocm1r5ejbtk0cyjozj5adeb1kzu2hsna5w2mdhytndhdjj15dquxwynyd9o6spp1sja8wx91z23qbwxldf2j3qsy9zrhmf1q92hh8f6j714hpxxcz6gbwjwy2fn34oiuc3ji85y8katu64cjgdvti7c4e62ua3946pf6l73s2jm1rht26ghc52qwa0zugz98qufr6fvjdafsg2q5ab1l0dkcbygdah8hub7wk14va1w3gxq0uqjxi01uvzyl4b5jtsi1924j11figa4cyd92jf8e4zrb7fy878r3vwgc0onxt6u4xae6f4t4hm0cgtdo == 
\m\q\n\j\b\e\j\a\5\8\y\z\n\u\s\3\0\i\i\k\k\e\j\a\e\l\w\s\r\q\j\i\u\x\k\6\b\9\x\0\b\j\s\d\m\3\l\j\1\2\r\m\a\p\h\v\a\o\p\1\n\4\8\v\6\o\y\7\w\n\3\m\8\z\l\k\w\h\3\3\u\f\p\n\3\6\1\j\c\q\g\0\s\i\y\g\1\u\6\0\0\g\z\8\r\g\c\0\2\b\7\1\5\u\y\o\l\5\4\f\i\e\a\c\l\k\i\o\v\d\c\p\q\0\6\k\p\5\j\s\3\x\p\g\w\l\j\u\h\y\o\k\7\0\e\6\r\t\c\3\s\z\c\h\i\0\3\n\t\r\s\o\u\m\z\7\o\n\k\g\d\6\s\a\m\h\f\q\m\i\h\c\o\c\m\1\r\5\e\j\b\t\k\0\c\y\j\o\z\j\5\a\d\e\b\1\k\z\u\2\h\s\n\a\5\w\2\m\d\h\y\t\n\d\h\d\j\j\1\5\d\q\u\x\w\y\n\y\d\9\o\6\s\p\p\1\s\j\a\8\w\x\9\1\z\2\3\q\b\w\x\l\d\f\2\j\3\q\s\y\9\z\r\h\m\f\1\q\9\2\h\h\8\f\6\j\7\1\4\h\p\x\x\c\z\6\g\b\w\j\w\y\2\f\n\3\4\o\i\u\c\3\j\i\8\5\y\8\k\a\t\u\6\4\c\j\g\d\v\t\i\7\c\4\e\6\2\u\a\3\9\4\6\p\f\6\l\7\3\s\2\j\m\1\r\h\t\2\6\g\h\c\5\2\q\w\a\0\z\u\g\z\9\8\q\u\f\r\6\f\v\j\d\a\f\s\g\2\q\5\a\b\1\l\0\d\k\c\b\y\g\d\a\h\8\h\u\b\7\w\k\1\4\v\a\1\w\3\g\x\q\0\u\q\j\x\i\0\1\u\v\z\y\l\4\b\5\j\t\s\i\1\9\2\4\j\1\1\f\i\g\a\4\c\y\d\9\2\j\f\8\e\4\z\r\b\7\f\y\8\7\8\r\3\v\w\g\c\0\o\n\x\t\6\u\4\x\a\e\6\f\4\t\4\h\m\0\c\g\t\d\o ]] 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:44.945 19:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:15:44.945 [2024-10-17 19:17:54.031082] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:44.945 [2024-10-17 19:17:54.031488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60758 ] 00:15:44.945 [2024-10-17 19:17:54.171341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.202 [2024-10-17 19:17:54.244748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.202 [2024-10-17 19:17:54.301706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.202  [2024-10-17T19:17:54.719Z] Copying: 512/512 [B] (average 500 kBps) 00:15:45.461 00:15:45.461 19:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ys3wf2r8xjk276buarreg3wb58aubw5yhpfajodyuop0vfpqkx1sgmi2m7fv371p3taspv6qs8t4by4rlh5gw5cprvddjr6jxyhp4i3itb7lswuvf50clc1uifiktk3hjebixyzjr1xc17pyu6cqojld8y561hgzn7m3hfsoyakbl200hv4ohnsdp1wzxnstgom0ot35c2sqv6wfqs3zndsffv48vbysnipeqcmg4rvlm4nb22iw3ym8fqgqos8lhz2rpd7abfh9v3xu3dcjjrekk5ssyjqd64fsh880ydpdlkpztfit25gshxyc9vweshdj8bxgxdo0hd60iiyyi4oux9jsj9t5a4qvyaknw0plz7r8qa0xw09ilmez3z6fnvn35l31szoukhftzvfcu2y5ylq7zaxo7whauqq5vyf6lfrvy9dgdzz6s5bd3opedzvro3dcrrp2dkstgx8fn1lebb5sbelr1nsixt2gc7b6lk9srwa8c5sylj1ka8t8 == \y\s\3\w\f\2\r\8\x\j\k\2\7\6\b\u\a\r\r\e\g\3\w\b\5\8\a\u\b\w\5\y\h\p\f\a\j\o\d\y\u\o\p\0\v\f\p\q\k\x\1\s\g\m\i\2\m\7\f\v\3\7\1\p\3\t\a\s\p\v\6\q\s\8\t\4\b\y\4\r\l\h\5\g\w\5\c\p\r\v\d\d\j\r\6\j\x\y\h\p\4\i\3\i\t\b\7\l\s\w\u\v\f\5\0\c\l\c\1\u\i\f\i\k\t\k\3\h\j\e\b\i\x\y\z\j\r\1\x\c\1\7\p\y\u\6\c\q\o\j\l\d\8\y\5\6\1\h\g\z\n\7\m\3\h\f\s\o\y\a\k\b\l\2\0\0\h\v\4\o\h\n\s\d\p\1\w\z\x\n\s\t\g\o\m\0\o\t\3\5\c\2\s\q\v\6\w\f\q\s\3\z\n\d\s\f\f\v\4\8\v\b\y\s\n\i\p\e\q\c\m\g\4\r\v\l\m\4\n\b\2\2\i\w\3\y\m\8\f\q\g\q\o\s\8\l\h\z\2\r\p\d\7\a\b\f\h\9\v\3\x\u\3\d\c\j\j\r\e\k\k\5\s\s\y\j\q\d\6\4\f\s\h\8\8\0\y\d\p\d\l\k\p\z\t\f\i\t\2\5\g\s\h\x\y\c\9\v\w\e\s\h\d\j\8\b\x\g\x\d\o\0\h\d\6\0\i\i\y\y\i\4\o\u\x\9\j\s\j\9\t\5\a\4\q\v\y\a\k\n\w\0\p\l\z\7\r\8\q\a\0\x\w\0\9\i\l\m\e\z\3\z\6\f\n\v\n\3\5\l\3\1\s\z\o\u\k\h\f\t\z\v\f\c\u\2\y\5\y\l\q\7\z\a\x\o\7\w\h\a\u\q\q\5\v\y\f\6\l\f\r\v\y\9\d\g\d\z\z\6\s\5\b\d\3\o\p\e\d\z\v\r\o\3\d\c\r\r\p\2\d\k\s\t\g\x\8\f\n\1\l\e\b\b\5\s\b\e\l\r\1\n\s\i\x\t\2\g\c\7\b\6\l\k\9\s\r\w\a\8\c\5\s\y\l\j\1\k\a\8\t\8 ]] 00:15:45.461 19:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:45.461 19:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:15:45.461 [2024-10-17 19:17:54.629027] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:45.461 [2024-10-17 19:17:54.629402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60771 ] 00:15:45.720 [2024-10-17 19:17:54.766932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.720 [2024-10-17 19:17:54.841218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.720 [2024-10-17 19:17:54.894281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.720  [2024-10-17T19:17:55.238Z] Copying: 512/512 [B] (average 500 kBps) 00:15:45.980 00:15:45.980 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ys3wf2r8xjk276buarreg3wb58aubw5yhpfajodyuop0vfpqkx1sgmi2m7fv371p3taspv6qs8t4by4rlh5gw5cprvddjr6jxyhp4i3itb7lswuvf50clc1uifiktk3hjebixyzjr1xc17pyu6cqojld8y561hgzn7m3hfsoyakbl200hv4ohnsdp1wzxnstgom0ot35c2sqv6wfqs3zndsffv48vbysnipeqcmg4rvlm4nb22iw3ym8fqgqos8lhz2rpd7abfh9v3xu3dcjjrekk5ssyjqd64fsh880ydpdlkpztfit25gshxyc9vweshdj8bxgxdo0hd60iiyyi4oux9jsj9t5a4qvyaknw0plz7r8qa0xw09ilmez3z6fnvn35l31szoukhftzvfcu2y5ylq7zaxo7whauqq5vyf6lfrvy9dgdzz6s5bd3opedzvro3dcrrp2dkstgx8fn1lebb5sbelr1nsixt2gc7b6lk9srwa8c5sylj1ka8t8 == \y\s\3\w\f\2\r\8\x\j\k\2\7\6\b\u\a\r\r\e\g\3\w\b\5\8\a\u\b\w\5\y\h\p\f\a\j\o\d\y\u\o\p\0\v\f\p\q\k\x\1\s\g\m\i\2\m\7\f\v\3\7\1\p\3\t\a\s\p\v\6\q\s\8\t\4\b\y\4\r\l\h\5\g\w\5\c\p\r\v\d\d\j\r\6\j\x\y\h\p\4\i\3\i\t\b\7\l\s\w\u\v\f\5\0\c\l\c\1\u\i\f\i\k\t\k\3\h\j\e\b\i\x\y\z\j\r\1\x\c\1\7\p\y\u\6\c\q\o\j\l\d\8\y\5\6\1\h\g\z\n\7\m\3\h\f\s\o\y\a\k\b\l\2\0\0\h\v\4\o\h\n\s\d\p\1\w\z\x\n\s\t\g\o\m\0\o\t\3\5\c\2\s\q\v\6\w\f\q\s\3\z\n\d\s\f\f\v\4\8\v\b\y\s\n\i\p\e\q\c\m\g\4\r\v\l\m\4\n\b\2\2\i\w\3\y\m\8\f\q\g\q\o\s\8\l\h\z\2\r\p\d\7\a\b\f\h\9\v\3\x\u\3\d\c\j\j\r\e\k\k\5\s\s\y\j\q\d\6\4\f\s\h\8\8\0\y\d\p\d\l\k\p\z\t\f\i\t\2\5\g\s\h\x\y\c\9\v\w\e\s\h\d\j\8\b\x\g\x\d\o\0\h\d\6\0\i\i\y\y\i\4\o\u\x\9\j\s\j\9\t\5\a\4\q\v\y\a\k\n\w\0\p\l\z\7\r\8\q\a\0\x\w\0\9\i\l\m\e\z\3\z\6\f\n\v\n\3\5\l\3\1\s\z\o\u\k\h\f\t\z\v\f\c\u\2\y\5\y\l\q\7\z\a\x\o\7\w\h\a\u\q\q\5\v\y\f\6\l\f\r\v\y\9\d\g\d\z\z\6\s\5\b\d\3\o\p\e\d\z\v\r\o\3\d\c\r\r\p\2\d\k\s\t\g\x\8\f\n\1\l\e\b\b\5\s\b\e\l\r\1\n\s\i\x\t\2\g\c\7\b\6\l\k\9\s\r\w\a\8\c\5\s\y\l\j\1\k\a\8\t\8 ]] 00:15:45.980 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:45.980 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:15:45.980 [2024-10-17 19:17:55.191596] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:45.981 [2024-10-17 19:17:55.191973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ] 00:15:46.245 [2024-10-17 19:17:55.327899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.245 [2024-10-17 19:17:55.395545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.245 [2024-10-17 19:17:55.450879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.245  [2024-10-17T19:17:55.761Z] Copying: 512/512 [B] (average 250 kBps) 00:15:46.503 00:15:46.503 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ys3wf2r8xjk276buarreg3wb58aubw5yhpfajodyuop0vfpqkx1sgmi2m7fv371p3taspv6qs8t4by4rlh5gw5cprvddjr6jxyhp4i3itb7lswuvf50clc1uifiktk3hjebixyzjr1xc17pyu6cqojld8y561hgzn7m3hfsoyakbl200hv4ohnsdp1wzxnstgom0ot35c2sqv6wfqs3zndsffv48vbysnipeqcmg4rvlm4nb22iw3ym8fqgqos8lhz2rpd7abfh9v3xu3dcjjrekk5ssyjqd64fsh880ydpdlkpztfit25gshxyc9vweshdj8bxgxdo0hd60iiyyi4oux9jsj9t5a4qvyaknw0plz7r8qa0xw09ilmez3z6fnvn35l31szoukhftzvfcu2y5ylq7zaxo7whauqq5vyf6lfrvy9dgdzz6s5bd3opedzvro3dcrrp2dkstgx8fn1lebb5sbelr1nsixt2gc7b6lk9srwa8c5sylj1ka8t8 == \y\s\3\w\f\2\r\8\x\j\k\2\7\6\b\u\a\r\r\e\g\3\w\b\5\8\a\u\b\w\5\y\h\p\f\a\j\o\d\y\u\o\p\0\v\f\p\q\k\x\1\s\g\m\i\2\m\7\f\v\3\7\1\p\3\t\a\s\p\v\6\q\s\8\t\4\b\y\4\r\l\h\5\g\w\5\c\p\r\v\d\d\j\r\6\j\x\y\h\p\4\i\3\i\t\b\7\l\s\w\u\v\f\5\0\c\l\c\1\u\i\f\i\k\t\k\3\h\j\e\b\i\x\y\z\j\r\1\x\c\1\7\p\y\u\6\c\q\o\j\l\d\8\y\5\6\1\h\g\z\n\7\m\3\h\f\s\o\y\a\k\b\l\2\0\0\h\v\4\o\h\n\s\d\p\1\w\z\x\n\s\t\g\o\m\0\o\t\3\5\c\2\s\q\v\6\w\f\q\s\3\z\n\d\s\f\f\v\4\8\v\b\y\s\n\i\p\e\q\c\m\g\4\r\v\l\m\4\n\b\2\2\i\w\3\y\m\8\f\q\g\q\o\s\8\l\h\z\2\r\p\d\7\a\b\f\h\9\v\3\x\u\3\d\c\j\j\r\e\k\k\5\s\s\y\j\q\d\6\4\f\s\h\8\8\0\y\d\p\d\l\k\p\z\t\f\i\t\2\5\g\s\h\x\y\c\9\v\w\e\s\h\d\j\8\b\x\g\x\d\o\0\h\d\6\0\i\i\y\y\i\4\o\u\x\9\j\s\j\9\t\5\a\4\q\v\y\a\k\n\w\0\p\l\z\7\r\8\q\a\0\x\w\0\9\i\l\m\e\z\3\z\6\f\n\v\n\3\5\l\3\1\s\z\o\u\k\h\f\t\z\v\f\c\u\2\y\5\y\l\q\7\z\a\x\o\7\w\h\a\u\q\q\5\v\y\f\6\l\f\r\v\y\9\d\g\d\z\z\6\s\5\b\d\3\o\p\e\d\z\v\r\o\3\d\c\r\r\p\2\d\k\s\t\g\x\8\f\n\1\l\e\b\b\5\s\b\e\l\r\1\n\s\i\x\t\2\g\c\7\b\6\l\k\9\s\r\w\a\8\c\5\s\y\l\j\1\k\a\8\t\8 ]] 00:15:46.503 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:15:46.503 19:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:15:46.761 [2024-10-17 19:17:55.766105] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
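The dd_flags_misc_forced_aio pass traced above cycles a small 512-byte payload through spdk_dd's AIO path once per combination of the direct/nonblock input flags and the direct/nonblock/sync/dsync output flags, then checks that the bytes survive the round trip. A condensed sketch of what the traced dd/posix.sh loop appears to be doing follows; where gen_bytes puts its payload and the exact form of the final comparison are assumptions read off the trace, not copied from the script.
flags_ro=(direct nonblock)                       # input flags under test (posix.sh@81)
flags_rw=("${flags_ro[@]}" sync dsync)           # output flags add sync/dsync (posix.sh@82)
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512                                  # helper from dd/common.sh; assumed to fill dd.dump0 with 512 random bytes
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    [[ $(< dd.dump0) == $(< dd.dump1) ]]         # round-trip check at posix.sh@93; assumed to compare the two dump files
  done
done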
00:15:46.761 [2024-10-17 19:17:55.766244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 00:15:46.761 [2024-10-17 19:17:55.904798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.761 [2024-10-17 19:17:55.971058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.020 [2024-10-17 19:17:56.024616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:47.020  [2024-10-17T19:17:56.278Z] Copying: 512/512 [B] (average 500 kBps) 00:15:47.020 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ys3wf2r8xjk276buarreg3wb58aubw5yhpfajodyuop0vfpqkx1sgmi2m7fv371p3taspv6qs8t4by4rlh5gw5cprvddjr6jxyhp4i3itb7lswuvf50clc1uifiktk3hjebixyzjr1xc17pyu6cqojld8y561hgzn7m3hfsoyakbl200hv4ohnsdp1wzxnstgom0ot35c2sqv6wfqs3zndsffv48vbysnipeqcmg4rvlm4nb22iw3ym8fqgqos8lhz2rpd7abfh9v3xu3dcjjrekk5ssyjqd64fsh880ydpdlkpztfit25gshxyc9vweshdj8bxgxdo0hd60iiyyi4oux9jsj9t5a4qvyaknw0plz7r8qa0xw09ilmez3z6fnvn35l31szoukhftzvfcu2y5ylq7zaxo7whauqq5vyf6lfrvy9dgdzz6s5bd3opedzvro3dcrrp2dkstgx8fn1lebb5sbelr1nsixt2gc7b6lk9srwa8c5sylj1ka8t8 == \y\s\3\w\f\2\r\8\x\j\k\2\7\6\b\u\a\r\r\e\g\3\w\b\5\8\a\u\b\w\5\y\h\p\f\a\j\o\d\y\u\o\p\0\v\f\p\q\k\x\1\s\g\m\i\2\m\7\f\v\3\7\1\p\3\t\a\s\p\v\6\q\s\8\t\4\b\y\4\r\l\h\5\g\w\5\c\p\r\v\d\d\j\r\6\j\x\y\h\p\4\i\3\i\t\b\7\l\s\w\u\v\f\5\0\c\l\c\1\u\i\f\i\k\t\k\3\h\j\e\b\i\x\y\z\j\r\1\x\c\1\7\p\y\u\6\c\q\o\j\l\d\8\y\5\6\1\h\g\z\n\7\m\3\h\f\s\o\y\a\k\b\l\2\0\0\h\v\4\o\h\n\s\d\p\1\w\z\x\n\s\t\g\o\m\0\o\t\3\5\c\2\s\q\v\6\w\f\q\s\3\z\n\d\s\f\f\v\4\8\v\b\y\s\n\i\p\e\q\c\m\g\4\r\v\l\m\4\n\b\2\2\i\w\3\y\m\8\f\q\g\q\o\s\8\l\h\z\2\r\p\d\7\a\b\f\h\9\v\3\x\u\3\d\c\j\j\r\e\k\k\5\s\s\y\j\q\d\6\4\f\s\h\8\8\0\y\d\p\d\l\k\p\z\t\f\i\t\2\5\g\s\h\x\y\c\9\v\w\e\s\h\d\j\8\b\x\g\x\d\o\0\h\d\6\0\i\i\y\y\i\4\o\u\x\9\j\s\j\9\t\5\a\4\q\v\y\a\k\n\w\0\p\l\z\7\r\8\q\a\0\x\w\0\9\i\l\m\e\z\3\z\6\f\n\v\n\3\5\l\3\1\s\z\o\u\k\h\f\t\z\v\f\c\u\2\y\5\y\l\q\7\z\a\x\o\7\w\h\a\u\q\q\5\v\y\f\6\l\f\r\v\y\9\d\g\d\z\z\6\s\5\b\d\3\o\p\e\d\z\v\r\o\3\d\c\r\r\p\2\d\k\s\t\g\x\8\f\n\1\l\e\b\b\5\s\b\e\l\r\1\n\s\i\x\t\2\g\c\7\b\6\l\k\9\s\r\w\a\8\c\5\s\y\l\j\1\k\a\8\t\8 ]] 00:15:47.278 00:15:47.278 real 0m4.610s 00:15:47.278 user 0m2.493s 00:15:47.278 sys 0m1.122s 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.278 ************************************ 00:15:47.278 END TEST dd_flags_misc_forced_aio 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:15:47.278 ************************************ 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:15:47.278 ************************************ 00:15:47.278 END TEST spdk_dd_posix 00:15:47.278 ************************************ 00:15:47.278 00:15:47.278 real 0m21.025s 00:15:47.278 user 0m10.174s 00:15:47.278 sys 0m6.731s 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.278 19:17:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:15:47.278 19:17:56 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:47.278 19:17:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:47.278 19:17:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.278 19:17:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:47.278 ************************************ 00:15:47.278 START TEST spdk_dd_malloc 00:15:47.278 ************************************ 00:15:47.278 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:15:47.278 * Looking for test storage... 00:15:47.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:47.278 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:47.278 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:47.278 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.536 --rc genhtml_branch_coverage=1 00:15:47.536 --rc genhtml_function_coverage=1 00:15:47.536 --rc genhtml_legend=1 00:15:47.536 --rc geninfo_all_blocks=1 00:15:47.536 --rc geninfo_unexecuted_blocks=1 00:15:47.536 00:15:47.536 ' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.536 --rc genhtml_branch_coverage=1 00:15:47.536 --rc genhtml_function_coverage=1 00:15:47.536 --rc genhtml_legend=1 00:15:47.536 --rc geninfo_all_blocks=1 00:15:47.536 --rc geninfo_unexecuted_blocks=1 00:15:47.536 00:15:47.536 ' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.536 --rc genhtml_branch_coverage=1 00:15:47.536 --rc genhtml_function_coverage=1 00:15:47.536 --rc genhtml_legend=1 00:15:47.536 --rc geninfo_all_blocks=1 00:15:47.536 --rc geninfo_unexecuted_blocks=1 00:15:47.536 00:15:47.536 ' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.536 --rc genhtml_branch_coverage=1 00:15:47.536 --rc genhtml_function_coverage=1 00:15:47.536 --rc genhtml_legend=1 00:15:47.536 --rc geninfo_all_blocks=1 00:15:47.536 --rc geninfo_unexecuted_blocks=1 00:15:47.536 00:15:47.536 ' 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.536 19:17:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.536 19:17:56 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:47.537 ************************************ 00:15:47.537 START TEST dd_malloc_copy 00:15:47.537 ************************************ 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
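For the malloc copy test, gen_conf turns the two method_bdev_malloc_create_* arrays above into a JSON bdev configuration that spdk_dd reads from /dev/fd/62 (the generated document is dumped a little further down in the trace). An equivalent standalone invocation, with a hypothetical on-disk config path in place of the fd redirection, would look roughly like this:
cat > /tmp/malloc_copy.json <<'EOF'              # hypothetical path; contents mirror the gen_conf output below
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json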
00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:47.537 19:17:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:47.537 [2024-10-17 19:17:56.650018] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:47.537 [2024-10-17 19:17:56.650112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:15:47.537 { 00:15:47.537 "subsystems": [ 00:15:47.537 { 00:15:47.537 "subsystem": "bdev", 00:15:47.537 "config": [ 00:15:47.537 { 00:15:47.537 "params": { 00:15:47.537 "block_size": 512, 00:15:47.537 "num_blocks": 1048576, 00:15:47.537 "name": "malloc0" 00:15:47.537 }, 00:15:47.537 "method": "bdev_malloc_create" 00:15:47.537 }, 00:15:47.537 { 00:15:47.537 "params": { 00:15:47.537 "block_size": 512, 00:15:47.537 "num_blocks": 1048576, 00:15:47.537 "name": "malloc1" 00:15:47.537 }, 00:15:47.537 "method": "bdev_malloc_create" 00:15:47.537 }, 00:15:47.537 { 00:15:47.537 "method": "bdev_wait_for_examine" 00:15:47.537 } 00:15:47.537 ] 00:15:47.537 } 00:15:47.537 ] 00:15:47.537 } 00:15:47.537 [2024-10-17 19:17:56.787994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.795 [2024-10-17 19:17:56.863112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.795 [2024-10-17 19:17:56.921209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.170  [2024-10-17T19:17:59.363Z] Copying: 196/512 [MB] (196 MBps) [2024-10-17T19:17:59.930Z] Copying: 392/512 [MB] (196 MBps) [2024-10-17T19:18:00.519Z] Copying: 512/512 [MB] (average 196 MBps) 00:15:51.261 00:15:51.261 19:18:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:15:51.261 19:18:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:15:51.261 19:18:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:51.261 19:18:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:51.261 { 00:15:51.261 "subsystems": [ 00:15:51.261 { 00:15:51.261 "subsystem": "bdev", 00:15:51.261 "config": [ 00:15:51.261 { 00:15:51.261 "params": { 00:15:51.261 "block_size": 512, 00:15:51.261 "num_blocks": 1048576, 00:15:51.261 "name": "malloc0" 00:15:51.261 }, 00:15:51.261 "method": "bdev_malloc_create" 00:15:51.261 }, 00:15:51.261 { 00:15:51.261 "params": { 00:15:51.261 "block_size": 512, 00:15:51.261 "num_blocks": 1048576, 00:15:51.261 "name": "malloc1" 00:15:51.261 }, 00:15:51.261 "method": 
"bdev_malloc_create" 00:15:51.261 }, 00:15:51.261 { 00:15:51.261 "method": "bdev_wait_for_examine" 00:15:51.261 } 00:15:51.261 ] 00:15:51.261 } 00:15:51.261 ] 00:15:51.261 } 00:15:51.549 [2024-10-17 19:18:00.520225] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:15:51.549 [2024-10-17 19:18:00.520323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:15:51.549 [2024-10-17 19:18:00.652076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.549 [2024-10-17 19:18:00.720216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.549 [2024-10-17 19:18:00.775285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.925  [2024-10-17T19:18:03.131Z] Copying: 194/512 [MB] (194 MBps) [2024-10-17T19:18:04.066Z] Copying: 395/512 [MB] (200 MBps) [2024-10-17T19:18:04.324Z] Copying: 512/512 [MB] (average 198 MBps) 00:15:55.066 00:15:55.066 ************************************ 00:15:55.066 END TEST dd_malloc_copy 00:15:55.066 ************************************ 00:15:55.066 00:15:55.066 real 0m7.664s 00:15:55.066 user 0m6.640s 00:15:55.066 sys 0m0.853s 00:15:55.066 19:18:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.066 19:18:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:15:55.066 ************************************ 00:15:55.066 END TEST spdk_dd_malloc 00:15:55.066 ************************************ 00:15:55.066 00:15:55.066 real 0m7.923s 00:15:55.066 user 0m6.786s 00:15:55.066 sys 0m0.966s 00:15:55.066 19:18:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.066 19:18:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:15:55.325 19:18:04 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:55.325 19:18:04 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:55.325 19:18:04 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.325 19:18:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:15:55.325 ************************************ 00:15:55.325 START TEST spdk_dd_bdev_to_bdev 00:15:55.325 ************************************ 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:15:55.325 * Looking for test storage... 
00:15:55.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.325 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.326 --rc genhtml_branch_coverage=1 00:15:55.326 --rc genhtml_function_coverage=1 00:15:55.326 --rc genhtml_legend=1 00:15:55.326 --rc geninfo_all_blocks=1 00:15:55.326 --rc geninfo_unexecuted_blocks=1 00:15:55.326 00:15:55.326 ' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.326 --rc genhtml_branch_coverage=1 00:15:55.326 --rc genhtml_function_coverage=1 00:15:55.326 --rc genhtml_legend=1 00:15:55.326 --rc geninfo_all_blocks=1 00:15:55.326 --rc geninfo_unexecuted_blocks=1 00:15:55.326 00:15:55.326 ' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.326 --rc genhtml_branch_coverage=1 00:15:55.326 --rc genhtml_function_coverage=1 00:15:55.326 --rc genhtml_legend=1 00:15:55.326 --rc geninfo_all_blocks=1 00:15:55.326 --rc geninfo_unexecuted_blocks=1 00:15:55.326 00:15:55.326 ' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:55.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.326 --rc genhtml_branch_coverage=1 00:15:55.326 --rc genhtml_function_coverage=1 00:15:55.326 --rc genhtml_legend=1 00:15:55.326 --rc geninfo_all_blocks=1 00:15:55.326 --rc geninfo_unexecuted_blocks=1 00:15:55.326 00:15:55.326 ' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.326 19:18:04 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:55.326 ************************************ 00:15:55.326 START TEST dd_inflate_file 00:15:55.326 ************************************ 00:15:55.326 19:18:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:15:55.585 [2024-10-17 19:18:04.616234] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:55.585 [2024-10-17 19:18:04.616334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:15:55.585 [2024-10-17 19:18:04.752306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.585 [2024-10-17 19:18:04.825210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.843 [2024-10-17 19:18:04.883329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.843  [2024-10-17T19:18:05.360Z] Copying: 64/64 [MB] (average 1454 MBps) 00:15:56.102 00:15:56.102 00:15:56.102 real 0m0.594s 00:15:56.102 user 0m0.340s 00:15:56.102 sys 0m0.317s 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:15:56.102 ************************************ 00:15:56.102 END TEST dd_inflate_file 00:15:56.102 ************************************ 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:56.102 ************************************ 00:15:56.102 START TEST dd_copy_to_out_bdev 00:15:56.102 ************************************ 00:15:56.102 19:18:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:15:56.102 { 00:15:56.102 "subsystems": [ 00:15:56.102 { 00:15:56.102 "subsystem": "bdev", 00:15:56.102 "config": [ 00:15:56.102 { 00:15:56.102 "params": { 00:15:56.102 "trtype": "pcie", 00:15:56.102 "traddr": "0000:00:10.0", 00:15:56.102 "name": "Nvme0" 00:15:56.102 }, 00:15:56.102 "method": "bdev_nvme_attach_controller" 00:15:56.102 }, 00:15:56.102 { 00:15:56.102 "params": { 00:15:56.102 "trtype": "pcie", 00:15:56.102 "traddr": "0000:00:11.0", 00:15:56.102 "name": "Nvme1" 00:15:56.102 }, 00:15:56.102 "method": "bdev_nvme_attach_controller" 00:15:56.102 }, 00:15:56.102 { 00:15:56.102 "method": "bdev_wait_for_examine" 00:15:56.102 } 00:15:56.102 ] 00:15:56.102 } 00:15:56.102 ] 00:15:56.102 } 00:15:56.102 [2024-10-17 19:18:05.275668] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:56.102 [2024-10-17 19:18:05.275770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61067 ] 00:15:56.361 [2024-10-17 19:18:05.412904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.361 [2024-10-17 19:18:05.485190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.361 [2024-10-17 19:18:05.543072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.738  [2024-10-17T19:18:06.996Z] Copying: 60/64 [MB] (60 MBps) [2024-10-17T19:18:07.254Z] Copying: 64/64 [MB] (average 60 MBps) 00:15:57.996 00:15:57.996 00:15:57.996 real 0m1.822s 00:15:57.996 user 0m1.579s 00:15:57.996 sys 0m1.440s 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.997 ************************************ 00:15:57.997 END TEST dd_copy_to_out_bdev 00:15:57.997 ************************************ 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:57.997 ************************************ 00:15:57.997 START TEST dd_offset_magic 00:15:57.997 ************************************ 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:57.997 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:57.997 [2024-10-17 19:18:07.150198] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:57.997 [2024-10-17 19:18:07.150289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61112 ] 00:15:57.997 { 00:15:57.997 "subsystems": [ 00:15:57.997 { 00:15:57.997 "subsystem": "bdev", 00:15:57.997 "config": [ 00:15:57.997 { 00:15:57.997 "params": { 00:15:57.997 "trtype": "pcie", 00:15:57.997 "traddr": "0000:00:10.0", 00:15:57.997 "name": "Nvme0" 00:15:57.997 }, 00:15:57.997 "method": "bdev_nvme_attach_controller" 00:15:57.997 }, 00:15:57.997 { 00:15:57.997 "params": { 00:15:57.997 "trtype": "pcie", 00:15:57.997 "traddr": "0000:00:11.0", 00:15:57.997 "name": "Nvme1" 00:15:57.997 }, 00:15:57.997 "method": "bdev_nvme_attach_controller" 00:15:57.997 }, 00:15:57.997 { 00:15:57.997 "method": "bdev_wait_for_examine" 00:15:57.997 } 00:15:57.997 ] 00:15:57.997 } 00:15:57.997 ] 00:15:57.997 } 00:15:58.255 [2024-10-17 19:18:07.288929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.255 [2024-10-17 19:18:07.356220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.255 [2024-10-17 19:18:07.410713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.513  [2024-10-17T19:18:08.029Z] Copying: 65/65 [MB] (average 915 MBps) 00:15:58.771 00:15:58.771 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:15:58.771 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:15:58.772 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:58.772 19:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:58.772 { 00:15:58.772 "subsystems": [ 00:15:58.772 { 00:15:58.772 "subsystem": "bdev", 00:15:58.772 "config": [ 00:15:58.772 { 00:15:58.772 "params": { 00:15:58.772 "trtype": "pcie", 00:15:58.772 "traddr": "0000:00:10.0", 00:15:58.772 "name": "Nvme0" 00:15:58.772 }, 00:15:58.772 "method": "bdev_nvme_attach_controller" 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "params": { 00:15:58.772 "trtype": "pcie", 00:15:58.772 "traddr": "0000:00:11.0", 00:15:58.772 "name": "Nvme1" 00:15:58.772 }, 00:15:58.772 "method": "bdev_nvme_attach_controller" 00:15:58.772 }, 00:15:58.772 { 00:15:58.772 "method": "bdev_wait_for_examine" 00:15:58.772 } 00:15:58.772 ] 00:15:58.772 } 00:15:58.772 ] 00:15:58.772 } 00:15:58.772 [2024-10-17 19:18:07.955545] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:15:58.772 [2024-10-17 19:18:07.955652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61131 ] 00:15:59.030 [2024-10-17 19:18:08.091861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.030 [2024-10-17 19:18:08.156594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.030 [2024-10-17 19:18:08.212214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.289  [2024-10-17T19:18:08.805Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:15:59.547 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:15:59.547 19:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:15:59.547 [2024-10-17 19:18:08.658466] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
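Each offset_magic iteration in the trace copies 65 blocks (1 MiB block size) from Nvme0n1 into Nvme1n1 at the given seek offset, reads one block back from Nvme1n1 into dd.dump1, and checks that the 26-byte 'This Is Our Magic, find it' marker came through. Condensing the offset=16 commands shown above (the redirect feeding read -rn26 is assumed to be dd.dump1, and nvme.json stands in for the attach-controller JSON the harness passes on /dev/fd/62):
# write: push 65 x 1 MiB blocks from Nvme0n1 into Nvme1n1, seeking 16 blocks in
spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme.json
# read one block back from Nvme1n1 into the dump file
spdk_dd --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json nvme.json
# verify the magic marker survived the round trip (bdev_to_bdev.sh@35-36)
read -rn26 magic_check < test/dd/dd.dump1        # assumed redirect source
[[ $magic_check == 'This Is Our Magic, find it' ]]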
00:15:59.547 { 00:15:59.547 "subsystems": [ 00:15:59.547 { 00:15:59.547 "subsystem": "bdev", 00:15:59.547 "config": [ 00:15:59.547 { 00:15:59.547 "params": { 00:15:59.547 "trtype": "pcie", 00:15:59.547 "traddr": "0000:00:10.0", 00:15:59.547 "name": "Nvme0" 00:15:59.547 }, 00:15:59.547 "method": "bdev_nvme_attach_controller" 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "params": { 00:15:59.547 "trtype": "pcie", 00:15:59.547 "traddr": "0000:00:11.0", 00:15:59.547 "name": "Nvme1" 00:15:59.547 }, 00:15:59.547 "method": "bdev_nvme_attach_controller" 00:15:59.547 }, 00:15:59.547 { 00:15:59.547 "method": "bdev_wait_for_examine" 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 ] 00:15:59.547 } 00:15:59.547 [2024-10-17 19:18:08.658984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61143 ] 00:15:59.547 [2024-10-17 19:18:08.797594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.805 [2024-10-17 19:18:08.870179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.805 [2024-10-17 19:18:08.928366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.063  [2024-10-17T19:18:09.580Z] Copying: 65/65 [MB] (average 1000 MBps) 00:16:00.322 00:16:00.322 19:18:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:16:00.322 19:18:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:16:00.322 19:18:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:16:00.322 19:18:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:16:00.322 [2024-10-17 19:18:09.477628] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:00.322 [2024-10-17 19:18:09.477746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61163 ] 00:16:00.322 { 00:16:00.322 "subsystems": [ 00:16:00.322 { 00:16:00.322 "subsystem": "bdev", 00:16:00.322 "config": [ 00:16:00.322 { 00:16:00.322 "params": { 00:16:00.322 "trtype": "pcie", 00:16:00.322 "traddr": "0000:00:10.0", 00:16:00.322 "name": "Nvme0" 00:16:00.322 }, 00:16:00.322 "method": "bdev_nvme_attach_controller" 00:16:00.322 }, 00:16:00.322 { 00:16:00.322 "params": { 00:16:00.322 "trtype": "pcie", 00:16:00.322 "traddr": "0000:00:11.0", 00:16:00.322 "name": "Nvme1" 00:16:00.322 }, 00:16:00.322 "method": "bdev_nvme_attach_controller" 00:16:00.322 }, 00:16:00.322 { 00:16:00.322 "method": "bdev_wait_for_examine" 00:16:00.322 } 00:16:00.322 ] 00:16:00.322 } 00:16:00.322 ] 00:16:00.322 } 00:16:00.579 [2024-10-17 19:18:09.612771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.579 [2024-10-17 19:18:09.679801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.579 [2024-10-17 19:18:09.736024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.837  [2024-10-17T19:18:10.353Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:16:01.095 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:16:01.095 00:16:01.095 real 0m3.036s 00:16:01.095 user 0m2.153s 00:16:01.095 sys 0m0.958s 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.095 ************************************ 00:16:01.095 END TEST dd_offset_magic 00:16:01.095 ************************************ 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:16:01.095 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 [2024-10-17 19:18:10.222043] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:01.095 [2024-10-17 19:18:10.222327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:16:01.095 { 00:16:01.095 "subsystems": [ 00:16:01.095 { 00:16:01.095 "subsystem": "bdev", 00:16:01.095 "config": [ 00:16:01.095 { 00:16:01.095 "params": { 00:16:01.095 "trtype": "pcie", 00:16:01.095 "traddr": "0000:00:10.0", 00:16:01.095 "name": "Nvme0" 00:16:01.095 }, 00:16:01.095 "method": "bdev_nvme_attach_controller" 00:16:01.095 }, 00:16:01.095 { 00:16:01.095 "params": { 00:16:01.095 "trtype": "pcie", 00:16:01.095 "traddr": "0000:00:11.0", 00:16:01.095 "name": "Nvme1" 00:16:01.095 }, 00:16:01.095 "method": "bdev_nvme_attach_controller" 00:16:01.095 }, 00:16:01.095 { 00:16:01.095 "method": "bdev_wait_for_examine" 00:16:01.095 } 00:16:01.095 ] 00:16:01.095 } 00:16:01.095 ] 00:16:01.095 } 00:16:01.364 [2024-10-17 19:18:10.357387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.364 [2024-10-17 19:18:10.433396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.364 [2024-10-17 19:18:10.492493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.623  [2024-10-17T19:18:11.139Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:16:01.881 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:16:01.881 19:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:16:01.881 [2024-10-17 19:18:10.941809] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:01.881 [2024-10-17 19:18:10.941917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:16:01.881 { 00:16:01.881 "subsystems": [ 00:16:01.881 { 00:16:01.881 "subsystem": "bdev", 00:16:01.881 "config": [ 00:16:01.881 { 00:16:01.881 "params": { 00:16:01.881 "trtype": "pcie", 00:16:01.881 "traddr": "0000:00:10.0", 00:16:01.881 "name": "Nvme0" 00:16:01.881 }, 00:16:01.881 "method": "bdev_nvme_attach_controller" 00:16:01.881 }, 00:16:01.881 { 00:16:01.881 "params": { 00:16:01.881 "trtype": "pcie", 00:16:01.881 "traddr": "0000:00:11.0", 00:16:01.881 "name": "Nvme1" 00:16:01.881 }, 00:16:01.881 "method": "bdev_nvme_attach_controller" 00:16:01.881 }, 00:16:01.881 { 00:16:01.881 "method": "bdev_wait_for_examine" 00:16:01.881 } 00:16:01.881 ] 00:16:01.881 } 00:16:01.881 ] 00:16:01.881 } 00:16:01.881 [2024-10-17 19:18:11.081072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.139 [2024-10-17 19:18:11.165351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.139 [2024-10-17 19:18:11.219511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.397  [2024-10-17T19:18:11.655Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:16:02.397 00:16:02.397 19:18:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:16:02.397 ************************************ 00:16:02.397 END TEST spdk_dd_bdev_to_bdev 00:16:02.397 ************************************ 00:16:02.397 00:16:02.397 real 0m7.262s 00:16:02.397 user 0m5.240s 00:16:02.397 sys 0m3.455s 00:16:02.397 19:18:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.397 19:18:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:16:02.656 19:18:11 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:16:02.656 19:18:11 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:16:02.656 19:18:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:02.656 19:18:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.656 19:18:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:02.656 ************************************ 00:16:02.656 START TEST spdk_dd_uring 00:16:02.656 ************************************ 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:16:02.656 * Looking for test storage... 
00:16:02.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.656 --rc genhtml_branch_coverage=1 00:16:02.656 --rc genhtml_function_coverage=1 00:16:02.656 --rc genhtml_legend=1 00:16:02.656 --rc geninfo_all_blocks=1 00:16:02.656 --rc geninfo_unexecuted_blocks=1 00:16:02.656 00:16:02.656 ' 00:16:02.656 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.657 --rc genhtml_branch_coverage=1 00:16:02.657 --rc genhtml_function_coverage=1 00:16:02.657 --rc genhtml_legend=1 00:16:02.657 --rc geninfo_all_blocks=1 00:16:02.657 --rc geninfo_unexecuted_blocks=1 00:16:02.657 00:16:02.657 ' 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.657 --rc genhtml_branch_coverage=1 00:16:02.657 --rc genhtml_function_coverage=1 00:16:02.657 --rc genhtml_legend=1 00:16:02.657 --rc geninfo_all_blocks=1 00:16:02.657 --rc geninfo_unexecuted_blocks=1 00:16:02.657 00:16:02.657 ' 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.657 --rc genhtml_branch_coverage=1 00:16:02.657 --rc genhtml_function_coverage=1 00:16:02.657 --rc genhtml_legend=1 00:16:02.657 --rc geninfo_all_blocks=1 00:16:02.657 --rc geninfo_unexecuted_blocks=1 00:16:02.657 00:16:02.657 ' 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:16:02.657 ************************************ 00:16:02.657 START TEST dd_uring_copy 00:16:02.657 ************************************ 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:16:02.657 
19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:02.657 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=dnsr8k1w5o91x8xzaxuy8fljd6a0lullau0jfiz7nfe8fw9nvzagy4is1qq9oknb2lbkbsilhgwli1nqakvkwdbivumjqrksn2bctyxeq6lnpi8sojzh37nbyotbhu5fk94vg8h0h8quyaw2e59q8isrfgxfee1o19ho1nttv2ukq5bbqygdzp7cdmzc9lkykwksdct5m8ea5ko8ao1z7k8bavucwyljlewo6xrk27vcyvfyl69zcyt0ffw5n4ud174dxoc2tp9gixrzz1do2odl27fsr9k9l3xaw3368xzjmjb1a8cbbefrzi9e7472jn8bpqgz9z4wz10ikw26aj25kz719o18zo8wkxvbsi4jlo89h862ot0j83dwd9ipk41j1qeo0jtreu1kk5u3cddo1rts6uwdi48nfu32ffu0xsrmhgtykka3qpezdxe3nx27eir2psfufcdbzvvtq8olzndoww45qkyejii7ugjorgufw8gmbrhmjz6cg7ygsef2k2i33aoc87oneyo1jjgl7jqtw6nks7y4bgc93gfphewdaq0k95rfs1xqtvn3jffsh9lx2jg668oosqdsde011a2dlsu57cb36aclszkx9qrmmkmi1j12zlx3yv5hbjh81ievq0ajitf1w0mnj6rws7odeu8nyh77hszcyr4o0gxqczmz6247pm6yd2nlplp488l0r8ufckc5gnhmcz7y66efiaq6chln4kzhir59mse3q8jlwdil2c6skfvjqrlgw3183ggrq0i1isjreuu3pc0aouhme7em3nxlclr9nlg94aiw89hcg1q2or3o5x3getbqigj81z31qtb83nveee9pddlbig6km1kb21v3jxhh6a5y6c92qaq5w455g4zkdoqu30k1gi23okpvquqmpzadtxhc4bfvr0yiziz4xrhgwal51vjrpl5p3hu8cbu4ib3pns3ts1deup29kgr0coki2yktjgc3stevg3cby8u2kvkowxl7cx1111lm 00:16:02.658 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
dnsr8k1w5o91x8xzaxuy8fljd6a0lullau0jfiz7nfe8fw9nvzagy4is1qq9oknb2lbkbsilhgwli1nqakvkwdbivumjqrksn2bctyxeq6lnpi8sojzh37nbyotbhu5fk94vg8h0h8quyaw2e59q8isrfgxfee1o19ho1nttv2ukq5bbqygdzp7cdmzc9lkykwksdct5m8ea5ko8ao1z7k8bavucwyljlewo6xrk27vcyvfyl69zcyt0ffw5n4ud174dxoc2tp9gixrzz1do2odl27fsr9k9l3xaw3368xzjmjb1a8cbbefrzi9e7472jn8bpqgz9z4wz10ikw26aj25kz719o18zo8wkxvbsi4jlo89h862ot0j83dwd9ipk41j1qeo0jtreu1kk5u3cddo1rts6uwdi48nfu32ffu0xsrmhgtykka3qpezdxe3nx27eir2psfufcdbzvvtq8olzndoww45qkyejii7ugjorgufw8gmbrhmjz6cg7ygsef2k2i33aoc87oneyo1jjgl7jqtw6nks7y4bgc93gfphewdaq0k95rfs1xqtvn3jffsh9lx2jg668oosqdsde011a2dlsu57cb36aclszkx9qrmmkmi1j12zlx3yv5hbjh81ievq0ajitf1w0mnj6rws7odeu8nyh77hszcyr4o0gxqczmz6247pm6yd2nlplp488l0r8ufckc5gnhmcz7y66efiaq6chln4kzhir59mse3q8jlwdil2c6skfvjqrlgw3183ggrq0i1isjreuu3pc0aouhme7em3nxlclr9nlg94aiw89hcg1q2or3o5x3getbqigj81z31qtb83nveee9pddlbig6km1kb21v3jxhh6a5y6c92qaq5w455g4zkdoqu30k1gi23okpvquqmpzadtxhc4bfvr0yiziz4xrhgwal51vjrpl5p3hu8cbu4ib3pns3ts1deup29kgr0coki2yktjgc3stevg3cby8u2kvkowxl7cx1111lm 00:16:02.658 19:18:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:16:02.916 [2024-10-17 19:18:11.920681] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:02.916 [2024-10-17 19:18:11.920971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61294 ] 00:16:02.916 [2024-10-17 19:18:12.052461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.916 [2024-10-17 19:18:12.125849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.173 [2024-10-17 19:18:12.185115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.739  [2024-10-17T19:18:13.563Z] Copying: 511/511 [MB] (average 941 MBps) 00:16:04.305 00:16:04.305 19:18:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:16:04.305 19:18:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:16:04.305 19:18:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:04.306 19:18:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:04.306 [2024-10-17 19:18:13.543328] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:04.306 [2024-10-17 19:18:13.543439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61315 ] 00:16:04.306 { 00:16:04.306 "subsystems": [ 00:16:04.306 { 00:16:04.306 "subsystem": "bdev", 00:16:04.306 "config": [ 00:16:04.306 { 00:16:04.306 "params": { 00:16:04.306 "block_size": 512, 00:16:04.306 "num_blocks": 1048576, 00:16:04.306 "name": "malloc0" 00:16:04.306 }, 00:16:04.306 "method": "bdev_malloc_create" 00:16:04.306 }, 00:16:04.306 { 00:16:04.306 "params": { 00:16:04.306 "filename": "/dev/zram1", 00:16:04.306 "name": "uring0" 00:16:04.306 }, 00:16:04.306 "method": "bdev_uring_create" 00:16:04.306 }, 00:16:04.306 { 00:16:04.306 "method": "bdev_wait_for_examine" 00:16:04.306 } 00:16:04.306 ] 00:16:04.306 } 00:16:04.306 ] 00:16:04.306 } 00:16:04.563 [2024-10-17 19:18:13.680646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.563 [2024-10-17 19:18:13.752553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.820 [2024-10-17 19:18:13.828929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.193  [2024-10-17T19:18:16.383Z] Copying: 209/512 [MB] (209 MBps) [2024-10-17T19:18:16.643Z] Copying: 420/512 [MB] (211 MBps) [2024-10-17T19:18:17.243Z] Copying: 512/512 [MB] (average 210 MBps) 00:16:07.985 00:16:07.985 19:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:16:07.985 19:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:16:07.985 19:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:07.985 19:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:07.985 { 00:16:07.985 "subsystems": [ 00:16:07.985 { 00:16:07.985 "subsystem": "bdev", 00:16:07.985 "config": [ 00:16:07.985 { 00:16:07.985 "params": { 00:16:07.985 "block_size": 512, 00:16:07.985 "num_blocks": 1048576, 00:16:07.985 "name": "malloc0" 00:16:07.985 }, 00:16:07.985 "method": "bdev_malloc_create" 00:16:07.985 }, 00:16:07.985 { 00:16:07.985 "params": { 00:16:07.985 "filename": "/dev/zram1", 00:16:07.985 "name": "uring0" 00:16:07.985 }, 00:16:07.985 "method": "bdev_uring_create" 00:16:07.985 }, 00:16:07.985 { 00:16:07.985 "method": "bdev_wait_for_examine" 00:16:07.985 } 00:16:07.985 ] 00:16:07.985 } 00:16:07.985 ] 00:16:07.985 } 00:16:07.985 [2024-10-17 19:18:17.128078] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:07.985 [2024-10-17 19:18:17.128255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:16:08.243 [2024-10-17 19:18:17.274735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.243 [2024-10-17 19:18:17.346671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.243 [2024-10-17 19:18:17.419658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.616  [2024-10-17T19:18:19.820Z] Copying: 163/512 [MB] (163 MBps) [2024-10-17T19:18:20.798Z] Copying: 328/512 [MB] (164 MBps) [2024-10-17T19:18:21.056Z] Copying: 485/512 [MB] (157 MBps) [2024-10-17T19:18:21.314Z] Copying: 512/512 [MB] (average 162 MBps) 00:16:12.056 00:16:12.056 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:16:12.056 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ dnsr8k1w5o91x8xzaxuy8fljd6a0lullau0jfiz7nfe8fw9nvzagy4is1qq9oknb2lbkbsilhgwli1nqakvkwdbivumjqrksn2bctyxeq6lnpi8sojzh37nbyotbhu5fk94vg8h0h8quyaw2e59q8isrfgxfee1o19ho1nttv2ukq5bbqygdzp7cdmzc9lkykwksdct5m8ea5ko8ao1z7k8bavucwyljlewo6xrk27vcyvfyl69zcyt0ffw5n4ud174dxoc2tp9gixrzz1do2odl27fsr9k9l3xaw3368xzjmjb1a8cbbefrzi9e7472jn8bpqgz9z4wz10ikw26aj25kz719o18zo8wkxvbsi4jlo89h862ot0j83dwd9ipk41j1qeo0jtreu1kk5u3cddo1rts6uwdi48nfu32ffu0xsrmhgtykka3qpezdxe3nx27eir2psfufcdbzvvtq8olzndoww45qkyejii7ugjorgufw8gmbrhmjz6cg7ygsef2k2i33aoc87oneyo1jjgl7jqtw6nks7y4bgc93gfphewdaq0k95rfs1xqtvn3jffsh9lx2jg668oosqdsde011a2dlsu57cb36aclszkx9qrmmkmi1j12zlx3yv5hbjh81ievq0ajitf1w0mnj6rws7odeu8nyh77hszcyr4o0gxqczmz6247pm6yd2nlplp488l0r8ufckc5gnhmcz7y66efiaq6chln4kzhir59mse3q8jlwdil2c6skfvjqrlgw3183ggrq0i1isjreuu3pc0aouhme7em3nxlclr9nlg94aiw89hcg1q2or3o5x3getbqigj81z31qtb83nveee9pddlbig6km1kb21v3jxhh6a5y6c92qaq5w455g4zkdoqu30k1gi23okpvquqmpzadtxhc4bfvr0yiziz4xrhgwal51vjrpl5p3hu8cbu4ib3pns3ts1deup29kgr0coki2yktjgc3stevg3cby8u2kvkowxl7cx1111lm == 
\d\n\s\r\8\k\1\w\5\o\9\1\x\8\x\z\a\x\u\y\8\f\l\j\d\6\a\0\l\u\l\l\a\u\0\j\f\i\z\7\n\f\e\8\f\w\9\n\v\z\a\g\y\4\i\s\1\q\q\9\o\k\n\b\2\l\b\k\b\s\i\l\h\g\w\l\i\1\n\q\a\k\v\k\w\d\b\i\v\u\m\j\q\r\k\s\n\2\b\c\t\y\x\e\q\6\l\n\p\i\8\s\o\j\z\h\3\7\n\b\y\o\t\b\h\u\5\f\k\9\4\v\g\8\h\0\h\8\q\u\y\a\w\2\e\5\9\q\8\i\s\r\f\g\x\f\e\e\1\o\1\9\h\o\1\n\t\t\v\2\u\k\q\5\b\b\q\y\g\d\z\p\7\c\d\m\z\c\9\l\k\y\k\w\k\s\d\c\t\5\m\8\e\a\5\k\o\8\a\o\1\z\7\k\8\b\a\v\u\c\w\y\l\j\l\e\w\o\6\x\r\k\2\7\v\c\y\v\f\y\l\6\9\z\c\y\t\0\f\f\w\5\n\4\u\d\1\7\4\d\x\o\c\2\t\p\9\g\i\x\r\z\z\1\d\o\2\o\d\l\2\7\f\s\r\9\k\9\l\3\x\a\w\3\3\6\8\x\z\j\m\j\b\1\a\8\c\b\b\e\f\r\z\i\9\e\7\4\7\2\j\n\8\b\p\q\g\z\9\z\4\w\z\1\0\i\k\w\2\6\a\j\2\5\k\z\7\1\9\o\1\8\z\o\8\w\k\x\v\b\s\i\4\j\l\o\8\9\h\8\6\2\o\t\0\j\8\3\d\w\d\9\i\p\k\4\1\j\1\q\e\o\0\j\t\r\e\u\1\k\k\5\u\3\c\d\d\o\1\r\t\s\6\u\w\d\i\4\8\n\f\u\3\2\f\f\u\0\x\s\r\m\h\g\t\y\k\k\a\3\q\p\e\z\d\x\e\3\n\x\2\7\e\i\r\2\p\s\f\u\f\c\d\b\z\v\v\t\q\8\o\l\z\n\d\o\w\w\4\5\q\k\y\e\j\i\i\7\u\g\j\o\r\g\u\f\w\8\g\m\b\r\h\m\j\z\6\c\g\7\y\g\s\e\f\2\k\2\i\3\3\a\o\c\8\7\o\n\e\y\o\1\j\j\g\l\7\j\q\t\w\6\n\k\s\7\y\4\b\g\c\9\3\g\f\p\h\e\w\d\a\q\0\k\9\5\r\f\s\1\x\q\t\v\n\3\j\f\f\s\h\9\l\x\2\j\g\6\6\8\o\o\s\q\d\s\d\e\0\1\1\a\2\d\l\s\u\5\7\c\b\3\6\a\c\l\s\z\k\x\9\q\r\m\m\k\m\i\1\j\1\2\z\l\x\3\y\v\5\h\b\j\h\8\1\i\e\v\q\0\a\j\i\t\f\1\w\0\m\n\j\6\r\w\s\7\o\d\e\u\8\n\y\h\7\7\h\s\z\c\y\r\4\o\0\g\x\q\c\z\m\z\6\2\4\7\p\m\6\y\d\2\n\l\p\l\p\4\8\8\l\0\r\8\u\f\c\k\c\5\g\n\h\m\c\z\7\y\6\6\e\f\i\a\q\6\c\h\l\n\4\k\z\h\i\r\5\9\m\s\e\3\q\8\j\l\w\d\i\l\2\c\6\s\k\f\v\j\q\r\l\g\w\3\1\8\3\g\g\r\q\0\i\1\i\s\j\r\e\u\u\3\p\c\0\a\o\u\h\m\e\7\e\m\3\n\x\l\c\l\r\9\n\l\g\9\4\a\i\w\8\9\h\c\g\1\q\2\o\r\3\o\5\x\3\g\e\t\b\q\i\g\j\8\1\z\3\1\q\t\b\8\3\n\v\e\e\e\9\p\d\d\l\b\i\g\6\k\m\1\k\b\2\1\v\3\j\x\h\h\6\a\5\y\6\c\9\2\q\a\q\5\w\4\5\5\g\4\z\k\d\o\q\u\3\0\k\1\g\i\2\3\o\k\p\v\q\u\q\m\p\z\a\d\t\x\h\c\4\b\f\v\r\0\y\i\z\i\z\4\x\r\h\g\w\a\l\5\1\v\j\r\p\l\5\p\3\h\u\8\c\b\u\4\i\b\3\p\n\s\3\t\s\1\d\e\u\p\2\9\k\g\r\0\c\o\k\i\2\y\k\t\j\g\c\3\s\t\e\v\g\3\c\b\y\8\u\2\k\v\k\o\w\x\l\7\c\x\1\1\1\1\l\m ]] 00:16:12.056 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:16:12.057 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ dnsr8k1w5o91x8xzaxuy8fljd6a0lullau0jfiz7nfe8fw9nvzagy4is1qq9oknb2lbkbsilhgwli1nqakvkwdbivumjqrksn2bctyxeq6lnpi8sojzh37nbyotbhu5fk94vg8h0h8quyaw2e59q8isrfgxfee1o19ho1nttv2ukq5bbqygdzp7cdmzc9lkykwksdct5m8ea5ko8ao1z7k8bavucwyljlewo6xrk27vcyvfyl69zcyt0ffw5n4ud174dxoc2tp9gixrzz1do2odl27fsr9k9l3xaw3368xzjmjb1a8cbbefrzi9e7472jn8bpqgz9z4wz10ikw26aj25kz719o18zo8wkxvbsi4jlo89h862ot0j83dwd9ipk41j1qeo0jtreu1kk5u3cddo1rts6uwdi48nfu32ffu0xsrmhgtykka3qpezdxe3nx27eir2psfufcdbzvvtq8olzndoww45qkyejii7ugjorgufw8gmbrhmjz6cg7ygsef2k2i33aoc87oneyo1jjgl7jqtw6nks7y4bgc93gfphewdaq0k95rfs1xqtvn3jffsh9lx2jg668oosqdsde011a2dlsu57cb36aclszkx9qrmmkmi1j12zlx3yv5hbjh81ievq0ajitf1w0mnj6rws7odeu8nyh77hszcyr4o0gxqczmz6247pm6yd2nlplp488l0r8ufckc5gnhmcz7y66efiaq6chln4kzhir59mse3q8jlwdil2c6skfvjqrlgw3183ggrq0i1isjreuu3pc0aouhme7em3nxlclr9nlg94aiw89hcg1q2or3o5x3getbqigj81z31qtb83nveee9pddlbig6km1kb21v3jxhh6a5y6c92qaq5w455g4zkdoqu30k1gi23okpvquqmpzadtxhc4bfvr0yiziz4xrhgwal51vjrpl5p3hu8cbu4ib3pns3ts1deup29kgr0coki2yktjgc3stevg3cby8u2kvkowxl7cx1111lm == 
\d\n\s\r\8\k\1\w\5\o\9\1\x\8\x\z\a\x\u\y\8\f\l\j\d\6\a\0\l\u\l\l\a\u\0\j\f\i\z\7\n\f\e\8\f\w\9\n\v\z\a\g\y\4\i\s\1\q\q\9\o\k\n\b\2\l\b\k\b\s\i\l\h\g\w\l\i\1\n\q\a\k\v\k\w\d\b\i\v\u\m\j\q\r\k\s\n\2\b\c\t\y\x\e\q\6\l\n\p\i\8\s\o\j\z\h\3\7\n\b\y\o\t\b\h\u\5\f\k\9\4\v\g\8\h\0\h\8\q\u\y\a\w\2\e\5\9\q\8\i\s\r\f\g\x\f\e\e\1\o\1\9\h\o\1\n\t\t\v\2\u\k\q\5\b\b\q\y\g\d\z\p\7\c\d\m\z\c\9\l\k\y\k\w\k\s\d\c\t\5\m\8\e\a\5\k\o\8\a\o\1\z\7\k\8\b\a\v\u\c\w\y\l\j\l\e\w\o\6\x\r\k\2\7\v\c\y\v\f\y\l\6\9\z\c\y\t\0\f\f\w\5\n\4\u\d\1\7\4\d\x\o\c\2\t\p\9\g\i\x\r\z\z\1\d\o\2\o\d\l\2\7\f\s\r\9\k\9\l\3\x\a\w\3\3\6\8\x\z\j\m\j\b\1\a\8\c\b\b\e\f\r\z\i\9\e\7\4\7\2\j\n\8\b\p\q\g\z\9\z\4\w\z\1\0\i\k\w\2\6\a\j\2\5\k\z\7\1\9\o\1\8\z\o\8\w\k\x\v\b\s\i\4\j\l\o\8\9\h\8\6\2\o\t\0\j\8\3\d\w\d\9\i\p\k\4\1\j\1\q\e\o\0\j\t\r\e\u\1\k\k\5\u\3\c\d\d\o\1\r\t\s\6\u\w\d\i\4\8\n\f\u\3\2\f\f\u\0\x\s\r\m\h\g\t\y\k\k\a\3\q\p\e\z\d\x\e\3\n\x\2\7\e\i\r\2\p\s\f\u\f\c\d\b\z\v\v\t\q\8\o\l\z\n\d\o\w\w\4\5\q\k\y\e\j\i\i\7\u\g\j\o\r\g\u\f\w\8\g\m\b\r\h\m\j\z\6\c\g\7\y\g\s\e\f\2\k\2\i\3\3\a\o\c\8\7\o\n\e\y\o\1\j\j\g\l\7\j\q\t\w\6\n\k\s\7\y\4\b\g\c\9\3\g\f\p\h\e\w\d\a\q\0\k\9\5\r\f\s\1\x\q\t\v\n\3\j\f\f\s\h\9\l\x\2\j\g\6\6\8\o\o\s\q\d\s\d\e\0\1\1\a\2\d\l\s\u\5\7\c\b\3\6\a\c\l\s\z\k\x\9\q\r\m\m\k\m\i\1\j\1\2\z\l\x\3\y\v\5\h\b\j\h\8\1\i\e\v\q\0\a\j\i\t\f\1\w\0\m\n\j\6\r\w\s\7\o\d\e\u\8\n\y\h\7\7\h\s\z\c\y\r\4\o\0\g\x\q\c\z\m\z\6\2\4\7\p\m\6\y\d\2\n\l\p\l\p\4\8\8\l\0\r\8\u\f\c\k\c\5\g\n\h\m\c\z\7\y\6\6\e\f\i\a\q\6\c\h\l\n\4\k\z\h\i\r\5\9\m\s\e\3\q\8\j\l\w\d\i\l\2\c\6\s\k\f\v\j\q\r\l\g\w\3\1\8\3\g\g\r\q\0\i\1\i\s\j\r\e\u\u\3\p\c\0\a\o\u\h\m\e\7\e\m\3\n\x\l\c\l\r\9\n\l\g\9\4\a\i\w\8\9\h\c\g\1\q\2\o\r\3\o\5\x\3\g\e\t\b\q\i\g\j\8\1\z\3\1\q\t\b\8\3\n\v\e\e\e\9\p\d\d\l\b\i\g\6\k\m\1\k\b\2\1\v\3\j\x\h\h\6\a\5\y\6\c\9\2\q\a\q\5\w\4\5\5\g\4\z\k\d\o\q\u\3\0\k\1\g\i\2\3\o\k\p\v\q\u\q\m\p\z\a\d\t\x\h\c\4\b\f\v\r\0\y\i\z\i\z\4\x\r\h\g\w\a\l\5\1\v\j\r\p\l\5\p\3\h\u\8\c\b\u\4\i\b\3\p\n\s\3\t\s\1\d\e\u\p\2\9\k\g\r\0\c\o\k\i\2\y\k\t\j\g\c\3\s\t\e\v\g\3\c\b\y\8\u\2\k\v\k\o\w\x\l\7\c\x\1\1\1\1\l\m ]] 00:16:12.057 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:16:12.625 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:16:12.625 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:16:12.625 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:12.625 19:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:12.625 { 00:16:12.625 "subsystems": [ 00:16:12.625 { 00:16:12.625 "subsystem": "bdev", 00:16:12.625 "config": [ 00:16:12.625 { 00:16:12.625 "params": { 00:16:12.625 "block_size": 512, 00:16:12.625 "num_blocks": 1048576, 00:16:12.625 "name": "malloc0" 00:16:12.625 }, 00:16:12.625 "method": "bdev_malloc_create" 00:16:12.625 }, 00:16:12.625 { 00:16:12.625 "params": { 00:16:12.625 "filename": "/dev/zram1", 00:16:12.625 "name": "uring0" 00:16:12.625 }, 00:16:12.625 "method": "bdev_uring_create" 00:16:12.625 }, 00:16:12.625 { 00:16:12.625 "method": "bdev_wait_for_examine" 00:16:12.625 } 00:16:12.625 ] 00:16:12.625 } 00:16:12.625 ] 00:16:12.625 } 00:16:12.625 [2024-10-17 19:18:21.792756] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:12.625 [2024-10-17 19:18:21.792887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61436 ] 00:16:12.884 [2024-10-17 19:18:21.932004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.884 [2024-10-17 19:18:21.999119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.884 [2024-10-17 19:18:22.056267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.294  [2024-10-17T19:18:24.488Z] Copying: 142/512 [MB] (142 MBps) [2024-10-17T19:18:25.436Z] Copying: 287/512 [MB] (145 MBps) [2024-10-17T19:18:26.000Z] Copying: 431/512 [MB] (143 MBps) [2024-10-17T19:18:26.258Z] Copying: 512/512 [MB] (average 143 MBps) 00:16:17.000 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:17.000 19:18:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:17.258 [2024-10-17 19:18:26.271552] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:17.258 [2024-10-17 19:18:26.271672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61502 ] 00:16:17.258 { 00:16:17.258 "subsystems": [ 00:16:17.258 { 00:16:17.258 "subsystem": "bdev", 00:16:17.258 "config": [ 00:16:17.258 { 00:16:17.258 "params": { 00:16:17.258 "block_size": 512, 00:16:17.258 "num_blocks": 1048576, 00:16:17.258 "name": "malloc0" 00:16:17.258 }, 00:16:17.258 "method": "bdev_malloc_create" 00:16:17.258 }, 00:16:17.258 { 00:16:17.258 "params": { 00:16:17.258 "filename": "/dev/zram1", 00:16:17.258 "name": "uring0" 00:16:17.258 }, 00:16:17.258 "method": "bdev_uring_create" 00:16:17.258 }, 00:16:17.258 { 00:16:17.258 "params": { 00:16:17.258 "name": "uring0" 00:16:17.258 }, 00:16:17.258 "method": "bdev_uring_delete" 00:16:17.258 }, 00:16:17.258 { 00:16:17.258 "method": "bdev_wait_for_examine" 00:16:17.258 } 00:16:17.258 ] 00:16:17.258 } 00:16:17.258 ] 00:16:17.258 } 00:16:17.258 [2024-10-17 19:18:26.406960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.258 [2024-10-17 19:18:26.473473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.516 [2024-10-17 19:18:26.527494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.516  [2024-10-17T19:18:27.344Z] Copying: 0/0 [B] (average 0 Bps) 00:16:18.086 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.086 19:18:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:18.086 19:18:27 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:16:18.086 [2024-10-17 19:18:27.167843] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:18.086 [2024-10-17 19:18:27.167967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61525 ] 00:16:18.086 { 00:16:18.086 "subsystems": [ 00:16:18.086 { 00:16:18.086 "subsystem": "bdev", 00:16:18.086 "config": [ 00:16:18.086 { 00:16:18.086 "params": { 00:16:18.086 "block_size": 512, 00:16:18.087 "num_blocks": 1048576, 00:16:18.087 "name": "malloc0" 00:16:18.087 }, 00:16:18.087 "method": "bdev_malloc_create" 00:16:18.087 }, 00:16:18.087 { 00:16:18.087 "params": { 00:16:18.087 "filename": "/dev/zram1", 00:16:18.087 "name": "uring0" 00:16:18.087 }, 00:16:18.087 "method": "bdev_uring_create" 00:16:18.087 }, 00:16:18.087 { 00:16:18.087 "params": { 00:16:18.087 "name": "uring0" 00:16:18.087 }, 00:16:18.087 "method": "bdev_uring_delete" 00:16:18.087 }, 00:16:18.087 { 00:16:18.087 "method": "bdev_wait_for_examine" 00:16:18.087 } 00:16:18.087 ] 00:16:18.087 } 00:16:18.087 ] 00:16:18.087 } 00:16:18.087 [2024-10-17 19:18:27.306336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.354 [2024-10-17 19:18:27.372392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.354 [2024-10-17 19:18:27.426276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:18.612 [2024-10-17 19:18:27.629258] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:16:18.612 [2024-10-17 19:18:27.629334] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:16:18.612 [2024-10-17 19:18:27.629348] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:16:18.612 [2024-10-17 19:18:27.629359] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:18.869 [2024-10-17 19:18:27.939581] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:16:18.869 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:16:19.127 00:16:19.127 real 0m16.439s 00:16:19.127 user 0m11.003s 00:16:19.127 sys 0m14.123s 00:16:19.127 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.127 19:18:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:16:19.127 ************************************ 00:16:19.127 END TEST dd_uring_copy 00:16:19.127 ************************************ 00:16:19.127 ************************************ 00:16:19.127 END TEST spdk_dd_uring 00:16:19.127 ************************************ 00:16:19.127 00:16:19.127 real 0m16.662s 00:16:19.127 user 0m11.141s 00:16:19.127 sys 0m14.215s 00:16:19.127 19:18:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.127 19:18:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:16:19.127 19:18:28 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:16:19.127 19:18:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.127 19:18:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.127 19:18:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:19.127 ************************************ 00:16:19.127 START TEST spdk_dd_sparse 00:16:19.127 ************************************ 00:16:19.127 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:16:19.385 * Looking for test storage... 00:16:19.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:19.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.385 --rc genhtml_branch_coverage=1 00:16:19.385 --rc genhtml_function_coverage=1 00:16:19.385 --rc genhtml_legend=1 00:16:19.385 --rc geninfo_all_blocks=1 00:16:19.385 --rc geninfo_unexecuted_blocks=1 00:16:19.385 00:16:19.385 ' 00:16:19.385 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:19.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.386 --rc genhtml_branch_coverage=1 00:16:19.386 --rc genhtml_function_coverage=1 00:16:19.386 --rc genhtml_legend=1 00:16:19.386 --rc geninfo_all_blocks=1 00:16:19.386 --rc geninfo_unexecuted_blocks=1 00:16:19.386 00:16:19.386 ' 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.386 --rc genhtml_branch_coverage=1 00:16:19.386 --rc genhtml_function_coverage=1 00:16:19.386 --rc genhtml_legend=1 00:16:19.386 --rc geninfo_all_blocks=1 00:16:19.386 --rc geninfo_unexecuted_blocks=1 00:16:19.386 00:16:19.386 ' 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:19.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.386 --rc genhtml_branch_coverage=1 00:16:19.386 --rc genhtml_function_coverage=1 00:16:19.386 --rc genhtml_legend=1 00:16:19.386 --rc geninfo_all_blocks=1 00:16:19.386 --rc geninfo_unexecuted_blocks=1 00:16:19.386 00:16:19.386 ' 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.386 19:18:28 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:16:19.386 1+0 records in 00:16:19.386 1+0 records out 00:16:19.386 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00715087 s, 587 MB/s 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:16:19.386 1+0 records in 00:16:19.386 1+0 records out 00:16:19.386 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00864507 s, 485 MB/s 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:16:19.386 1+0 records in 00:16:19.386 1+0 records out 00:16:19.386 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00833742 s, 503 MB/s 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:16:19.386 ************************************ 00:16:19.386 START TEST dd_sparse_file_to_file 00:16:19.386 ************************************ 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:16:19.386 19:18:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:16:19.645 { 00:16:19.645 "subsystems": [ 00:16:19.645 { 00:16:19.645 "subsystem": "bdev", 00:16:19.645 "config": [ 00:16:19.645 { 00:16:19.645 "params": { 00:16:19.645 "block_size": 4096, 00:16:19.645 "filename": "dd_sparse_aio_disk", 00:16:19.645 "name": "dd_aio" 00:16:19.645 }, 00:16:19.645 "method": "bdev_aio_create" 00:16:19.645 }, 00:16:19.645 { 00:16:19.645 "params": { 00:16:19.645 "lvs_name": "dd_lvstore", 00:16:19.645 "bdev_name": "dd_aio" 00:16:19.645 }, 00:16:19.645 "method": "bdev_lvol_create_lvstore" 00:16:19.645 }, 00:16:19.645 { 00:16:19.645 "method": "bdev_wait_for_examine" 00:16:19.645 } 00:16:19.645 ] 00:16:19.645 } 00:16:19.645 ] 00:16:19.645 } 00:16:19.645 [2024-10-17 19:18:28.672309] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
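For reference, the prepare step above uses only coreutils; a stand-alone sketch with the same names and sizes as in the log (not the harness script itself) would be:

  # 100 MB backing file for the AIO bdev that hosts the lvstore.
  truncate --size 104857600 dd_sparse_aio_disk
  # Sparse 36 MiB source file: 4 MiB of data at offsets 0, 16 MiB and 32 MiB,
  # leaving two 12 MiB holes (37748736 bytes apparent, ~12 MiB actually allocated).
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8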
00:16:19.645 [2024-10-17 19:18:28.672570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61625 ] 00:16:19.645 [2024-10-17 19:18:28.811770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.645 [2024-10-17 19:18:28.886668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.903 [2024-10-17 19:18:28.946517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:19.903  [2024-10-17T19:18:29.420Z] Copying: 12/36 [MB] (average 923 MBps) 00:16:20.162 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:16:20.162 ************************************ 00:16:20.162 END TEST dd_sparse_file_to_file 00:16:20.162 ************************************ 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:16:20.162 00:16:20.162 real 0m0.702s 00:16:20.162 user 0m0.424s 00:16:20.162 sys 0m0.367s 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 ************************************ 00:16:20.162 START TEST dd_sparse_file_to_bdev 00:16:20.162 ************************************ 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:16:20.162 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 [2024-10-17 19:18:29.420654] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:20.420 [2024-10-17 19:18:29.421263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61673 ] 00:16:20.420 { 00:16:20.420 "subsystems": [ 00:16:20.420 { 00:16:20.420 "subsystem": "bdev", 00:16:20.420 "config": [ 00:16:20.420 { 00:16:20.420 "params": { 00:16:20.420 "block_size": 4096, 00:16:20.420 "filename": "dd_sparse_aio_disk", 00:16:20.420 "name": "dd_aio" 00:16:20.420 }, 00:16:20.420 "method": "bdev_aio_create" 00:16:20.420 }, 00:16:20.420 { 00:16:20.420 "params": { 00:16:20.420 "lvs_name": "dd_lvstore", 00:16:20.420 "lvol_name": "dd_lvol", 00:16:20.421 "size_in_mib": 36, 00:16:20.421 "thin_provision": true 00:16:20.421 }, 00:16:20.421 "method": "bdev_lvol_create" 00:16:20.421 }, 00:16:20.421 { 00:16:20.421 "method": "bdev_wait_for_examine" 00:16:20.421 } 00:16:20.421 ] 00:16:20.421 } 00:16:20.421 ] 00:16:20.421 } 00:16:20.421 [2024-10-17 19:18:29.555624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.421 [2024-10-17 19:18:29.622223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.679 [2024-10-17 19:18:29.677495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.679  [2024-10-17T19:18:30.195Z] Copying: 12/36 [MB] (average 545 MBps) 00:16:20.937 00:16:20.937 00:16:20.937 real 0m0.618s 00:16:20.937 user 0m0.384s 00:16:20.937 sys 0m0.335s 00:16:20.937 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.937 ************************************ 00:16:20.937 19:18:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 END TEST dd_sparse_file_to_bdev 00:16:20.937 ************************************ 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 ************************************ 00:16:20.937 START TEST dd_sparse_bdev_to_file 00:16:20.937 ************************************ 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
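The file-to-bdev copy above feeds spdk_dd a generated JSON bdev config over /dev/fd/62; written out as a stand-alone invocation it is roughly the following sketch (the config file name is illustrative, and it assumes the dd_lvstore created in the earlier file-to-file step is still present on dd_sparse_aio_disk):

  cat > dd_sparse_config.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_aio_create",
            "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" } },
          { "method": "bdev_lvol_create",
            "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                        "size_in_mib": 36, "thin_provision": true } },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # --sparse keeps the holes in file_zero2 from being written into the thin-provisioned lvol.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol \
      --bs=12582912 --sparse --json dd_sparse_config.json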
00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:16:20.937 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 [2024-10-17 19:18:30.089860] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:20.937 [2024-10-17 19:18:30.090171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61700 ] 00:16:20.937 { 00:16:20.937 "subsystems": [ 00:16:20.937 { 00:16:20.937 "subsystem": "bdev", 00:16:20.937 "config": [ 00:16:20.937 { 00:16:20.937 "params": { 00:16:20.937 "block_size": 4096, 00:16:20.937 "filename": "dd_sparse_aio_disk", 00:16:20.937 "name": "dd_aio" 00:16:20.937 }, 00:16:20.937 "method": "bdev_aio_create" 00:16:20.937 }, 00:16:20.937 { 00:16:20.937 "method": "bdev_wait_for_examine" 00:16:20.937 } 00:16:20.937 ] 00:16:20.937 } 00:16:20.937 ] 00:16:20.937 } 00:16:21.195 [2024-10-17 19:18:30.223978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.195 [2024-10-17 19:18:30.298823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.195 [2024-10-17 19:18:30.358352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:21.195  [2024-10-17T19:18:30.711Z] Copying: 12/36 [MB] (average 923 MBps) 00:16:21.453 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:16:21.453 ************************************ 00:16:21.453 END TEST dd_sparse_bdev_to_file 00:16:21.453 ************************************ 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:16:21.453 19:18:30 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:16:21.453 00:16:21.453 real 0m0.671s 00:16:21.453 user 0m0.425s 00:16:21.453 sys 0m0.347s 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.453 19:18:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:16:21.712 ************************************ 00:16:21.712 END TEST spdk_dd_sparse 00:16:21.712 ************************************ 00:16:21.712 00:16:21.712 real 0m2.393s 00:16:21.712 user 0m1.402s 00:16:21.712 sys 0m1.271s 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.712 19:18:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:16:21.712 19:18:30 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:16:21.712 19:18:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:21.712 19:18:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.712 19:18:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:21.712 ************************************ 00:16:21.712 START TEST spdk_dd_negative 00:16:21.712 ************************************ 00:16:21.712 19:18:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:16:21.712 * Looking for test storage... 
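The pass/fail criterion in the sparse tests above is a pair of stat comparisons: apparent size (%s) and allocated block count (%b) must both match between source and destination, which is what shows the holes survived the copy (37748736 bytes apparent versus 24576 512-byte blocks, i.e. 12 MiB of real data). A minimal stand-alone version of that check, using the same format strings, could look like:

  # Compare source and destination of the bdev-to-file copy.
  src=file_zero2; dst=file_zero3
  [[ "$(stat --printf=%s "$src")" == "$(stat --printf=%s "$dst")" ]] || { echo "apparent size differs"; exit 1; }
  [[ "$(stat --printf=%b "$src")" == "$(stat --printf=%b "$dst")" ]] || { echo "allocated blocks differ"; exit 1; }
  # %b is reported in 512-byte units here, as the 24576 value above implies: 24576 * 512 = 12582912 bytes.
  echo "OK: $(stat --printf=%s "$dst") bytes apparent, $(( $(stat --printf=%b "$dst") * 512 )) bytes allocated"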
00:16:21.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:16:21.712 19:18:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:21.712 19:18:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:16:21.712 19:18:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.971 --rc genhtml_branch_coverage=1 00:16:21.971 --rc genhtml_function_coverage=1 00:16:21.971 --rc genhtml_legend=1 00:16:21.971 --rc geninfo_all_blocks=1 00:16:21.971 --rc geninfo_unexecuted_blocks=1 00:16:21.971 00:16:21.971 ' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.971 --rc genhtml_branch_coverage=1 00:16:21.971 --rc genhtml_function_coverage=1 00:16:21.971 --rc genhtml_legend=1 00:16:21.971 --rc geninfo_all_blocks=1 00:16:21.971 --rc geninfo_unexecuted_blocks=1 00:16:21.971 00:16:21.971 ' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.971 --rc genhtml_branch_coverage=1 00:16:21.971 --rc genhtml_function_coverage=1 00:16:21.971 --rc genhtml_legend=1 00:16:21.971 --rc geninfo_all_blocks=1 00:16:21.971 --rc geninfo_unexecuted_blocks=1 00:16:21.971 00:16:21.971 ' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:21.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.971 --rc genhtml_branch_coverage=1 00:16:21.971 --rc genhtml_function_coverage=1 00:16:21.971 --rc genhtml_legend=1 00:16:21.971 --rc geninfo_all_blocks=1 00:16:21.971 --rc geninfo_unexecuted_blocks=1 00:16:21.971 00:16:21.971 ' 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.971 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:21.972 ************************************ 00:16:21.972 START TEST 
dd_invalid_arguments 00:16:21.972 ************************************ 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:21.972 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:16:21.972 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:16:21.972 00:16:21.972 CPU options: 00:16:21.972 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:16:21.972 (like [0,1,10]) 00:16:21.972 --lcores lcore to CPU mapping list. The list is in the format: 00:16:21.972 [<,lcores[@CPUs]>...] 00:16:21.972 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:16:21.972 Within the group, '-' is used for range separator, 00:16:21.972 ',' is used for single number separator. 00:16:21.972 '( )' can be omitted for single element group, 00:16:21.972 '@' can be omitted if cpus and lcores have the same value 00:16:21.972 --disable-cpumask-locks Disable CPU core lock files. 00:16:21.972 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:16:21.972 pollers in the app support interrupt mode) 00:16:21.972 -p, --main-core main (primary) core for DPDK 00:16:21.972 00:16:21.972 Configuration options: 00:16:21.972 -c, --config, --json JSON config file 00:16:21.972 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:16:21.972 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:16:21.972 --wait-for-rpc wait for RPCs to initialize subsystems 00:16:21.972 --rpcs-allowed comma-separated list of permitted RPCS 00:16:21.972 --json-ignore-init-errors don't exit on invalid config entry 00:16:21.972 00:16:21.972 Memory options: 00:16:21.972 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:16:21.972 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:16:21.972 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:16:21.972 -R, --huge-unlink unlink huge files after initialization 00:16:21.972 -n, --mem-channels number of memory channels used for DPDK 00:16:21.972 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:16:21.972 --msg-mempool-size global message memory pool size in count (default: 262143) 00:16:21.972 --no-huge run without using hugepages 00:16:21.972 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:16:21.972 -i, --shm-id shared memory ID (optional) 00:16:21.972 -g, --single-file-segments force creating just one hugetlbfs file 00:16:21.972 00:16:21.972 PCI options: 00:16:21.972 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:16:21.972 -B, --pci-blocked pci addr to block (can be used more than once) 00:16:21.972 -u, --no-pci disable PCI access 00:16:21.972 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:16:21.972 00:16:21.972 Log options: 00:16:21.972 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:16:21.972 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:16:21.972 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:16:21.972 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:16:21.972 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:16:21.972 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:16:21.972 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:16:21.972 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:16:21.972 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:16:21.972 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:16:21.972 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:16:21.972 --silence-noticelog disable notice level logging to stderr 00:16:21.972 00:16:21.972 Trace options: 00:16:21.972 --num-trace-entries number of trace entries for each core, must be power of 2, 00:16:21.972 setting 0 to disable trace (default 32768) 00:16:21.972 Tracepoints vary in size and can use more than one trace entry. 00:16:21.972 -e, --tpoint-group [:] 00:16:21.972 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:16:21.972 [2024-10-17 19:18:31.135954] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:16:21.972 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:16:21.972 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:16:21.972 bdev_raid, scheduler, all). 00:16:21.972 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:16:21.972 a tracepoint group. First tpoint inside a group can be enabled by 00:16:21.972 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:16:21.972 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:16:21.972 in /include/spdk_internal/trace_defs.h 00:16:21.972 00:16:21.972 Other options: 00:16:21.972 -h, --help show this usage 00:16:21.972 -v, --version print SPDK version 00:16:21.972 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:16:21.972 --env-context Opaque context for use of the env implementation 00:16:21.972 00:16:21.973 Application specific: 00:16:21.973 [--------- DD Options ---------] 00:16:21.973 --if Input file. Must specify either --if or --ib. 00:16:21.973 --ib Input bdev. Must specifier either --if or --ib 00:16:21.973 --of Output file. Must specify either --of or --ob. 00:16:21.973 --ob Output bdev. Must specify either --of or --ob. 00:16:21.973 --iflag Input file flags. 00:16:21.973 --oflag Output file flags. 00:16:21.973 --bs I/O unit size (default: 4096) 00:16:21.973 --qd Queue depth (default: 2) 00:16:21.973 --count I/O unit count. The number of I/O units to copy. (default: all) 00:16:21.973 --skip Skip this many I/O units at start of input. (default: 0) 00:16:21.973 --seek Skip this many I/O units at start of output. (default: 0) 00:16:21.973 --aio Force usage of AIO. (by default io_uring is used if available) 00:16:21.973 --sparse Enable hole skipping in input target 00:16:21.973 Available iflag and oflag values: 00:16:21.973 append - append mode 00:16:21.973 direct - use direct I/O for data 00:16:21.973 directory - fail unless a directory 00:16:21.973 dsync - use synchronized I/O for data 00:16:21.973 noatime - do not update access time 00:16:21.973 noctty - do not assign controlling terminal from file 00:16:21.973 nofollow - do not follow symlinks 00:16:21.973 nonblock - use non-blocking I/O 00:16:21.973 sync - use synchronized I/O for data and metadata 00:16:21.973 ************************************ 00:16:21.973 END TEST dd_invalid_arguments 00:16:21.973 ************************************ 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.973 00:16:21.973 real 0m0.077s 00:16:21.973 user 0m0.049s 00:16:21.973 sys 0m0.027s 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:21.973 ************************************ 00:16:21.973 START TEST dd_double_input 00:16:21.973 ************************************ 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:21.973 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:16:22.232 [2024-10-17 19:18:31.265239] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
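Every case in this negative suite follows the same pattern: invoke spdk_dd with a deliberately invalid argument combination and require a non-zero exit. Stripped of the autotest_common.sh NOT/xtrace plumbing, the double-input check above amounts to roughly this sketch (the stderr grep is an illustrative addition, not part of the harness):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  # Supplying both a file input (--if) and a bdev input (--ib) must be rejected.
  if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2> err.log; then
      echo "FAIL: spdk_dd accepted --if together with --ib" >&2
      exit 1
  fi
  grep -q 'You may specify either --if or --ib, but not both' err.log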
00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.232 00:16:22.232 real 0m0.085s 00:16:22.232 user 0m0.051s 00:16:22.232 sys 0m0.031s 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:16:22.232 ************************************ 00:16:22.232 END TEST dd_double_input 00:16:22.232 ************************************ 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:22.232 ************************************ 00:16:22.232 START TEST dd_double_output 00:16:22.232 ************************************ 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:16:22.232 [2024-10-17 19:18:31.393757] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.232 00:16:22.232 real 0m0.083s 00:16:22.232 user 0m0.044s 00:16:22.232 sys 0m0.038s 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.232 ************************************ 00:16:22.232 END TEST dd_double_output 00:16:22.232 ************************************ 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:22.232 ************************************ 00:16:22.232 START TEST dd_no_input 00:16:22.232 ************************************ 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:22.232 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:16:22.491 [2024-10-17 19:18:31.527445] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.491 00:16:22.491 real 0m0.082s 00:16:22.491 user 0m0.054s 00:16:22.491 sys 0m0.027s 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.491 ************************************ 00:16:22.491 END TEST dd_no_input 00:16:22.491 ************************************ 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:22.491 ************************************ 00:16:22.491 START TEST dd_no_output 00:16:22.491 ************************************ 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:22.491 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:16:22.492 [2024-10-17 19:18:31.671955] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:16:22.492 19:18:31 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.492 00:16:22.492 real 0m0.102s 00:16:22.492 user 0m0.065s 00:16:22.492 sys 0m0.034s 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:16:22.492 ************************************ 00:16:22.492 END TEST dd_no_output 00:16:22.492 ************************************ 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.492 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:22.751 ************************************ 00:16:22.751 START TEST dd_wrong_blocksize 00:16:22.751 ************************************ 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:16:22.751 [2024-10-17 19:18:31.811453] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.751 00:16:22.751 real 0m0.080s 00:16:22.751 user 0m0.052s 00:16:22.751 sys 0m0.027s 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:16:22.751 ************************************ 00:16:22.751 END TEST dd_wrong_blocksize 00:16:22.751 ************************************ 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:22.751 ************************************ 00:16:22.751 START TEST dd_smaller_blocksize 00:16:22.751 ************************************ 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.751 
19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:22.751 19:18:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:16:22.751 [2024-10-17 19:18:31.943691] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:22.751 [2024-10-17 19:18:31.943836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61932 ] 00:16:23.010 [2024-10-17 19:18:32.083585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.010 [2024-10-17 19:18:32.151584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.010 [2024-10-17 19:18:32.204945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.576 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:16:23.834 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:16:23.834 [2024-10-17 19:18:32.871710] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:16:23.834 [2024-10-17 19:18:32.871858] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.834 [2024-10-17 19:18:32.989536] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.834 00:16:23.834 real 0m1.177s 00:16:23.834 user 0m0.414s 00:16:23.834 sys 0m0.655s 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.834 ************************************ 00:16:23.834 END TEST dd_smaller_blocksize 00:16:23.834 ************************************ 00:16:23.834 19:18:33 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 ************************************ 00:16:24.093 START TEST dd_invalid_count 00:16:24.093 ************************************ 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
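The smaller-blocksize case above differs from the pure argument-parsing checks in that the failure comes from memory allocation: a --bs of 99999999999999 cannot be backed by the hugepage pool, so spdk_dd reports "Cannot allocate memory - try smaller block size value" and the harness maps the resulting exit code back to a generic failure. A reduced sketch of that check, reusing the paths and value from the log:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP=/home/vagrant/spdk_repo/spdk/test/dd
  # An oversized --bs must fail cleanly instead of being accepted as an I/O unit size.
  if "$SPDK_DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --bs=99999999999999 2> err.log; then
      echo "FAIL: oversized --bs was accepted" >&2
      exit 1
  fi
  grep -q 'Cannot allocate memory - try smaller block size value' err.log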
00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:16:24.093 [2024-10-17 19:18:33.166429] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.093 00:16:24.093 real 0m0.069s 00:16:24.093 user 0m0.043s 00:16:24.093 sys 0m0.026s 00:16:24.093 ************************************ 00:16:24.093 END TEST dd_invalid_count 00:16:24.093 ************************************ 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 ************************************ 
00:16:24.093 START TEST dd_invalid_oflag 00:16:24.093 ************************************ 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:16:24.093 [2024-10-17 19:18:33.285781] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.093 00:16:24.093 real 0m0.070s 00:16:24.093 user 0m0.039s 00:16:24.093 sys 0m0.030s 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:16:24.093 ************************************ 00:16:24.093 END TEST dd_invalid_oflag 00:16:24.093 ************************************ 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.093 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:24.384 ************************************ 00:16:24.384 START TEST dd_invalid_iflag 00:16:24.384 
************************************ 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:16:24.384 [2024-10-17 19:18:33.407387] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.384 00:16:24.384 real 0m0.072s 00:16:24.384 user 0m0.042s 00:16:24.384 sys 0m0.029s 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.384 ************************************ 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:16:24.384 END TEST dd_invalid_iflag 00:16:24.384 ************************************ 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:24.384 ************************************ 00:16:24.384 START TEST dd_unknown_flag 00:16:24.384 ************************************ 00:16:24.384 
19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:24.384 19:18:33 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:16:24.384 [2024-10-17 19:18:33.535329] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:24.384 [2024-10-17 19:18:33.535453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62035 ] 00:16:24.642 [2024-10-17 19:18:33.677959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.642 [2024-10-17 19:18:33.744371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.642 [2024-10-17 19:18:33.797744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.642 [2024-10-17 19:18:33.833716] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:16:24.642 [2024-10-17 19:18:33.833799] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.642 [2024-10-17 19:18:33.833866] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:16:24.642 [2024-10-17 19:18:33.833880] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.642 [2024-10-17 19:18:33.834154] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:16:24.642 [2024-10-17 19:18:33.834173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.642 [2024-10-17 19:18:33.834238] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:16:24.642 [2024-10-17 19:18:33.834249] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:16:24.900 [2024-10-17 19:18:33.950040] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.900 00:16:24.900 real 0m0.547s 00:16:24.900 user 0m0.303s 00:16:24.900 sys 0m0.145s 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.900 ************************************ 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:16:24.900 END TEST dd_unknown_flag 00:16:24.900 ************************************ 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:24.900 ************************************ 00:16:24.900 START TEST dd_invalid_json 00:16:24.900 ************************************ 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:16:24.900 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:24.901 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:16:24.901 [2024-10-17 19:18:34.137749] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:24.901 [2024-10-17 19:18:34.137871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62058 ] 00:16:25.159 [2024-10-17 19:18:34.276983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.159 [2024-10-17 19:18:34.344511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.159 [2024-10-17 19:18:34.344595] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:16:25.159 [2024-10-17 19:18:34.344615] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:25.159 [2024-10-17 19:18:34.344626] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:25.159 [2024-10-17 19:18:34.344670] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:16:25.418 ************************************ 00:16:25.418 END TEST dd_invalid_json 00:16:25.418 ************************************ 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.418 00:16:25.418 real 0m0.361s 00:16:25.418 user 0m0.189s 00:16:25.418 sys 0m0.070s 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:25.418 ************************************ 00:16:25.418 START TEST dd_invalid_seek 00:16:25.418 ************************************ 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:16:25.418 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:16:25.419 
19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:25.419 19:18:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:16:25.419 [2024-10-17 19:18:34.547030] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:25.419 [2024-10-17 19:18:34.547149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62093 ] 00:16:25.419 { 00:16:25.419 "subsystems": [ 00:16:25.419 { 00:16:25.419 "subsystem": "bdev", 00:16:25.419 "config": [ 00:16:25.419 { 00:16:25.419 "params": { 00:16:25.419 "block_size": 512, 00:16:25.419 "num_blocks": 512, 00:16:25.419 "name": "malloc0" 00:16:25.419 }, 00:16:25.419 "method": "bdev_malloc_create" 00:16:25.419 }, 00:16:25.419 { 00:16:25.419 "params": { 00:16:25.419 "block_size": 512, 00:16:25.419 "num_blocks": 512, 00:16:25.419 "name": "malloc1" 00:16:25.419 }, 00:16:25.419 "method": "bdev_malloc_create" 00:16:25.419 }, 00:16:25.419 { 00:16:25.419 "method": "bdev_wait_for_examine" 00:16:25.419 } 00:16:25.419 ] 00:16:25.419 } 00:16:25.419 ] 00:16:25.419 } 00:16:25.677 [2024-10-17 19:18:34.683573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.677 [2024-10-17 19:18:34.749097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.677 [2024-10-17 19:18:34.804082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.677 [2024-10-17 19:18:34.866947] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:16:25.677 [2024-10-17 19:18:34.867038] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:25.937 [2024-10-17 19:18:34.987474] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.937 00:16:25.937 real 0m0.572s 00:16:25.937 user 0m0.369s 00:16:25.937 sys 0m0.160s 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:16:25.937 ************************************ 00:16:25.937 END TEST dd_invalid_seek 00:16:25.937 ************************************ 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:25.937 ************************************ 00:16:25.937 START TEST dd_invalid_skip 00:16:25.937 ************************************ 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:25.937 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:16:25.937 { 00:16:25.937 "subsystems": [ 00:16:25.937 { 00:16:25.937 "subsystem": "bdev", 00:16:25.937 "config": [ 00:16:25.937 { 00:16:25.937 "params": { 00:16:25.937 "block_size": 512, 00:16:25.937 "num_blocks": 512, 00:16:25.937 "name": "malloc0" 00:16:25.937 }, 00:16:25.937 "method": "bdev_malloc_create" 00:16:25.937 }, 00:16:25.937 { 00:16:25.937 "params": { 00:16:25.937 "block_size": 512, 00:16:25.937 "num_blocks": 512, 00:16:25.937 "name": "malloc1" 
00:16:25.937 }, 00:16:25.937 "method": "bdev_malloc_create" 00:16:25.937 }, 00:16:25.937 { 00:16:25.937 "method": "bdev_wait_for_examine" 00:16:25.937 } 00:16:25.937 ] 00:16:25.937 } 00:16:25.937 ] 00:16:25.937 } 00:16:25.937 [2024-10-17 19:18:35.180095] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:25.937 [2024-10-17 19:18:35.180273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62121 ] 00:16:26.197 [2024-10-17 19:18:35.324457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.197 [2024-10-17 19:18:35.392984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.197 [2024-10-17 19:18:35.447724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:26.454 [2024-10-17 19:18:35.511338] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:16:26.454 [2024-10-17 19:18:35.511401] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:26.454 [2024-10-17 19:18:35.630754] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.454 00:16:26.454 real 0m0.592s 00:16:26.454 user 0m0.393s 00:16:26.454 sys 0m0.159s 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.454 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:16:26.454 ************************************ 00:16:26.454 END TEST dd_invalid_skip 00:16:26.454 ************************************ 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:26.713 ************************************ 00:16:26.713 START TEST dd_invalid_input_count 00:16:26.713 ************************************ 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:16:26.713 19:18:35 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:26.713 19:18:35 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:16:26.713 [2024-10-17 19:18:35.814520] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:26.713 [2024-10-17 19:18:35.814628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:16:26.713 { 00:16:26.713 "subsystems": [ 00:16:26.713 { 00:16:26.713 "subsystem": "bdev", 00:16:26.713 "config": [ 00:16:26.713 { 00:16:26.713 "params": { 00:16:26.713 "block_size": 512, 00:16:26.713 "num_blocks": 512, 00:16:26.713 "name": "malloc0" 00:16:26.713 }, 00:16:26.713 "method": "bdev_malloc_create" 00:16:26.713 }, 00:16:26.713 { 00:16:26.713 "params": { 00:16:26.713 "block_size": 512, 00:16:26.713 "num_blocks": 512, 00:16:26.713 "name": "malloc1" 00:16:26.713 }, 00:16:26.713 "method": "bdev_malloc_create" 00:16:26.713 }, 00:16:26.713 { 00:16:26.713 "method": "bdev_wait_for_examine" 00:16:26.713 } 00:16:26.713 ] 00:16:26.713 } 00:16:26.713 ] 00:16:26.713 } 00:16:26.713 [2024-10-17 19:18:35.950108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.971 [2024-10-17 19:18:36.017351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.971 [2024-10-17 19:18:36.071097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:26.971 [2024-10-17 19:18:36.135340] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:16:26.971 [2024-10-17 19:18:36.135421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.313 [2024-10-17 19:18:36.254964] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.313 00:16:27.313 real 0m0.569s 00:16:27.313 user 0m0.372s 00:16:27.313 sys 0m0.163s 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:16:27.313 ************************************ 00:16:27.313 END TEST dd_invalid_input_count 00:16:27.313 ************************************ 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:27.313 ************************************ 00:16:27.313 START TEST dd_invalid_output_count 00:16:27.313 ************************************ 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:27.313 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:16:27.313 { 00:16:27.313 "subsystems": [ 00:16:27.313 { 00:16:27.313 "subsystem": "bdev", 00:16:27.313 "config": [ 00:16:27.313 { 00:16:27.313 "params": { 00:16:27.313 "block_size": 512, 00:16:27.313 "num_blocks": 512, 00:16:27.313 "name": "malloc0" 00:16:27.313 }, 00:16:27.313 "method": "bdev_malloc_create" 00:16:27.313 }, 00:16:27.313 { 00:16:27.313 "method": "bdev_wait_for_examine" 00:16:27.313 } 00:16:27.313 ] 00:16:27.313 } 00:16:27.313 ] 00:16:27.313 } 00:16:27.313 [2024-10-17 19:18:36.440273] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 
initialization... 00:16:27.313 [2024-10-17 19:18:36.440387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62194 ] 00:16:27.573 [2024-10-17 19:18:36.578093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.573 [2024-10-17 19:18:36.649886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.573 [2024-10-17 19:18:36.703690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.573 [2024-10-17 19:18:36.760361] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:16:27.573 [2024-10-17 19:18:36.760440] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:27.835 [2024-10-17 19:18:36.880610] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.835 00:16:27.835 real 0m0.574s 00:16:27.835 user 0m0.368s 00:16:27.835 sys 0m0.154s 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 ************************************ 00:16:27.835 END TEST dd_invalid_output_count 00:16:27.835 ************************************ 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.835 19:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 ************************************ 00:16:27.835 START TEST dd_bs_not_multiple 00:16:27.835 ************************************ 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:16:27.835 19:18:37 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:16:27.835 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:16:27.835 { 00:16:27.835 "subsystems": [ 00:16:27.835 { 00:16:27.835 "subsystem": "bdev", 00:16:27.835 "config": [ 00:16:27.835 { 00:16:27.835 "params": { 00:16:27.835 "block_size": 512, 00:16:27.835 "num_blocks": 512, 00:16:27.835 "name": "malloc0" 00:16:27.835 }, 00:16:27.835 "method": "bdev_malloc_create" 00:16:27.835 }, 00:16:27.835 { 00:16:27.835 "params": { 00:16:27.835 "block_size": 512, 00:16:27.835 "num_blocks": 512, 00:16:27.835 "name": "malloc1" 00:16:27.835 }, 00:16:27.835 "method": "bdev_malloc_create" 00:16:27.836 }, 00:16:27.836 { 00:16:27.836 "method": "bdev_wait_for_examine" 00:16:27.836 } 00:16:27.836 ] 00:16:27.836 } 00:16:27.836 ] 00:16:27.836 } 00:16:27.836 [2024-10-17 19:18:37.071212] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:27.836 [2024-10-17 19:18:37.071338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:16:28.095 [2024-10-17 19:18:37.216001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.095 [2024-10-17 19:18:37.281333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.095 [2024-10-17 19:18:37.335526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.355 [2024-10-17 19:18:37.396765] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:16:28.355 [2024-10-17 19:18:37.396831] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:28.355 [2024-10-17 19:18:37.517963] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:28.355 00:16:28.355 real 0m0.582s 00:16:28.355 user 0m0.372s 00:16:28.355 sys 0m0.166s 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.355 ************************************ 00:16:28.355 END TEST dd_bs_not_multiple 00:16:28.355 ************************************ 00:16:28.355 19:18:37 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 00:16:28.614 real 0m6.813s 00:16:28.614 user 0m3.638s 00:16:28.614 sys 0m2.589s 00:16:28.614 19:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.614 19:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 ************************************ 00:16:28.614 END TEST spdk_dd_negative 00:16:28.614 ************************************ 00:16:28.614 00:16:28.614 real 1m21.034s 00:16:28.614 user 0m51.518s 00:16:28.614 sys 0m36.659s 00:16:28.614 19:18:37 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.614 19:18:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 ************************************ 00:16:28.614 END TEST spdk_dd 00:16:28.614 ************************************ 00:16:28.614 19:18:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:28.614 19:18:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.614 19:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 19:18:37 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:16:28.614 19:18:37 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:16:28.614 19:18:37 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:28.614 19:18:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.614 19:18:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.614 19:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 ************************************ 00:16:28.614 START TEST nvmf_tcp 00:16:28.614 ************************************ 00:16:28.614 19:18:37 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:28.614 * Looking for test storage... 00:16:28.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:28.614 19:18:37 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:28.614 19:18:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:16:28.614 19:18:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.873 19:18:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:28.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.873 --rc genhtml_branch_coverage=1 00:16:28.873 --rc genhtml_function_coverage=1 00:16:28.873 --rc genhtml_legend=1 00:16:28.873 --rc geninfo_all_blocks=1 00:16:28.873 --rc geninfo_unexecuted_blocks=1 00:16:28.873 00:16:28.873 ' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:28.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.873 --rc genhtml_branch_coverage=1 00:16:28.873 --rc genhtml_function_coverage=1 00:16:28.873 --rc genhtml_legend=1 00:16:28.873 --rc geninfo_all_blocks=1 00:16:28.873 --rc geninfo_unexecuted_blocks=1 00:16:28.873 00:16:28.873 ' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:28.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.873 --rc genhtml_branch_coverage=1 00:16:28.873 --rc genhtml_function_coverage=1 00:16:28.873 --rc genhtml_legend=1 00:16:28.873 --rc geninfo_all_blocks=1 00:16:28.873 --rc geninfo_unexecuted_blocks=1 00:16:28.873 00:16:28.873 ' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:28.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.873 --rc genhtml_branch_coverage=1 00:16:28.873 --rc genhtml_function_coverage=1 00:16:28.873 --rc genhtml_legend=1 00:16:28.873 --rc geninfo_all_blocks=1 00:16:28.873 --rc geninfo_unexecuted_blocks=1 00:16:28.873 00:16:28.873 ' 00:16:28.873 19:18:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:16:28.873 19:18:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:16:28.873 19:18:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.873 19:18:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.873 ************************************ 00:16:28.873 START TEST nvmf_target_core 00:16:28.873 ************************************ 00:16:28.873 19:18:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:16:28.873 * Looking for test storage... 00:16:28.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:28.873 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:28.873 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:16:28.873 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:28.873 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:28.873 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.874 --rc genhtml_branch_coverage=1 00:16:28.874 --rc genhtml_function_coverage=1 00:16:28.874 --rc genhtml_legend=1 00:16:28.874 --rc geninfo_all_blocks=1 00:16:28.874 --rc geninfo_unexecuted_blocks=1 00:16:28.874 00:16:28.874 ' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.874 --rc genhtml_branch_coverage=1 00:16:28.874 --rc genhtml_function_coverage=1 00:16:28.874 --rc genhtml_legend=1 00:16:28.874 --rc geninfo_all_blocks=1 00:16:28.874 --rc geninfo_unexecuted_blocks=1 00:16:28.874 00:16:28.874 ' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.874 --rc genhtml_branch_coverage=1 00:16:28.874 --rc genhtml_function_coverage=1 00:16:28.874 --rc genhtml_legend=1 00:16:28.874 --rc geninfo_all_blocks=1 00:16:28.874 --rc geninfo_unexecuted_blocks=1 00:16:28.874 00:16:28.874 ' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:28.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.874 --rc genhtml_branch_coverage=1 00:16:28.874 --rc genhtml_function_coverage=1 00:16:28.874 --rc genhtml_legend=1 00:16:28.874 --rc geninfo_all_blocks=1 00:16:28.874 --rc geninfo_unexecuted_blocks=1 00:16:28.874 00:16:28.874 ' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.874 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:29.134 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:29.134 ************************************ 00:16:29.134 START TEST nvmf_host_management 00:16:29.134 ************************************ 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.134 * Looking for test storage... 
00:16:29.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:16:29.134 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.135 --rc genhtml_branch_coverage=1 00:16:29.135 --rc genhtml_function_coverage=1 00:16:29.135 --rc genhtml_legend=1 00:16:29.135 --rc geninfo_all_blocks=1 00:16:29.135 --rc geninfo_unexecuted_blocks=1 00:16:29.135 00:16:29.135 ' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.135 --rc genhtml_branch_coverage=1 00:16:29.135 --rc genhtml_function_coverage=1 00:16:29.135 --rc genhtml_legend=1 00:16:29.135 --rc geninfo_all_blocks=1 00:16:29.135 --rc geninfo_unexecuted_blocks=1 00:16:29.135 00:16:29.135 ' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.135 --rc genhtml_branch_coverage=1 00:16:29.135 --rc genhtml_function_coverage=1 00:16:29.135 --rc genhtml_legend=1 00:16:29.135 --rc geninfo_all_blocks=1 00:16:29.135 --rc geninfo_unexecuted_blocks=1 00:16:29.135 00:16:29.135 ' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.135 --rc genhtml_branch_coverage=1 00:16:29.135 --rc genhtml_function_coverage=1 00:16:29.135 --rc genhtml_legend=1 00:16:29.135 --rc geninfo_all_blocks=1 00:16:29.135 --rc geninfo_unexecuted_blocks=1 00:16:29.135 00:16:29.135 ' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
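The block above is the harness probing the installed lcov version: scripts/common.sh splits each version string on '.', '-' and ':' and compares the numeric components one by one until they differ. A minimal standalone sketch of that comparison follows; the function name and the simplified less-than-only behaviour are illustrative, not the exact helper from scripts/common.sh.

# Return 0 (true) when $1 is an older version than $2, else 1.
version_lt() {
    local IFS=.-:                                    # split on the same separators the traced script uses
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}      # missing components count as 0
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1                                         # equal versions are not "less than"
}
# As in the trace: lcov 1.15 is older than 2, so the legacy --rc lcov_* options are kept.
version_lt 1.15 2 && echo "lcov < 2"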
00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:29.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.135 19:18:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:29.135 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.136 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:29.136 Cannot find device "nvmf_init_br" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:29.395 Cannot find device "nvmf_init_br2" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:29.395 Cannot find device "nvmf_tgt_br" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.395 Cannot find device "nvmf_tgt_br2" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:29.395 Cannot find device "nvmf_init_br" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:29.395 Cannot find device "nvmf_init_br2" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:29.395 Cannot find device "nvmf_tgt_br" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:29.395 Cannot find device "nvmf_tgt_br2" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:29.395 Cannot find device "nvmf_br" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:29.395 Cannot find device "nvmf_init_if" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:29.395 Cannot find device "nvmf_init_if2" 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.395 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:29.653 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:29.653 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:16:29.653 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:29.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:29.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:16:29.654 00:16:29.654 --- 10.0.0.3 ping statistics --- 00:16:29.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.654 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:29.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:29.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:16:29.654 00:16:29.654 --- 10.0.0.4 ping statistics --- 00:16:29.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.654 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:29.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:29.654 00:16:29.654 --- 10.0.0.1 ping statistics --- 00:16:29.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.654 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:29.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:29.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:29.654 00:16:29.654 --- 10.0.0.2 ping statistics --- 00:16:29.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.654 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62565 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62565 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62565 ']' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.654 19:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:29.912 [2024-10-17 19:18:38.938346] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
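The four ping runs above are the verification step for the veth/bridge topology that nvmf_veth_init assembled a few lines earlier: initiator interfaces on the host side, target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. Reduced to a single initiator/target pair (the harness creates two of each), the same layout can be sketched with plain iproute2/iptables commands; this is only an illustration of the layout, not a replacement for the test harness.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target leg, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic to the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                                  # host reaches the target address inside the namespace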
00:16:29.912 [2024-10-17 19:18:38.938469] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.912 [2024-10-17 19:18:39.079125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.912 [2024-10-17 19:18:39.162762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.912 [2024-10-17 19:18:39.162841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.912 [2024-10-17 19:18:39.162856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.912 [2024-10-17 19:18:39.162867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.912 [2024-10-17 19:18:39.162876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.912 [2024-10-17 19:18:39.164223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.912 [2024-10-17 19:18:39.164304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.912 [2024-10-17 19:18:39.164440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:29.912 [2024-10-17 19:18:39.164451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.170 [2024-10-17 19:18:39.223141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.105 19:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.105 19:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:16:31.105 19:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:31.105 19:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.105 19:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 [2024-10-17 19:18:40.042321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 Malloc0 00:16:31.105 [2024-10-17 19:18:40.125057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62619 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62619 /var/tmp/bdevperf.sock 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62619 ']' 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
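The cat | rpc_cmd pair traced above feeds a batch of JSON-RPC calls into the freshly started target; the log does not echo the batch itself, only its effects (a Malloc0 bdev and the NVMe/TCP listener on 10.0.0.3 port 4420). The lines below are a representative reconstruction using the standard rpc.py client and the names that do appear in the trace; the exact batch used by host_management.sh may differ.

# transport was already created above with: nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0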
00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:31.105 { 00:16:31.105 "params": { 00:16:31.105 "name": "Nvme$subsystem", 00:16:31.105 "trtype": "$TEST_TRANSPORT", 00:16:31.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.105 "adrfam": "ipv4", 00:16:31.105 "trsvcid": "$NVMF_PORT", 00:16:31.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.105 "hdgst": ${hdgst:-false}, 00:16:31.105 "ddgst": ${ddgst:-false} 00:16:31.105 }, 00:16:31.105 "method": "bdev_nvme_attach_controller" 00:16:31.105 } 00:16:31.105 EOF 00:16:31.105 )") 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:16:31.105 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:16:31.105 "params": { 00:16:31.105 "name": "Nvme0", 00:16:31.105 "trtype": "tcp", 00:16:31.105 "traddr": "10.0.0.3", 00:16:31.105 "adrfam": "ipv4", 00:16:31.105 "trsvcid": "4420", 00:16:31.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:31.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:31.105 "hdgst": false, 00:16:31.105 "ddgst": false 00:16:31.105 }, 00:16:31.105 "method": "bdev_nvme_attach_controller" 00:16:31.105 }' 00:16:31.105 [2024-10-17 19:18:40.223794] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:31.105 [2024-10-17 19:18:40.223899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62619 ] 00:16:31.364 [2024-10-17 19:18:40.363301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.364 [2024-10-17 19:18:40.436217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.364 [2024-10-17 19:18:40.502885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.685 Running I/O for 10 seconds... 
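gen_nvmf_target_json above fills in a single bdev_nvme_attach_controller entry with the parameters printed in the trace and hands it to bdevperf on /dev/fd/63. Written out as an ordinary file, the equivalent standalone run would look like the sketch below; the enclosing "subsystems"/"bdev" wrapper is the usual shape for such a config and is assumed here, since the trace only prints the inner object.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10    # same queue depth, I/O size, workload and runtime as the trace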
00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:31.685 19:18:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.946 19:18:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.946 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.946 [2024-10-17 19:18:41.058331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 
19:18:41.058525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [the same tcp.c:1773 recv-state error repeats for tqpair=0x254c020 at every timestamp from 19:18:41.058534 through 19:18:41.058892] 00:16:31.946 [2024-10-17 19:18:41.058900] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.058940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254c020 is same with the state(6) to be set 00:16:31.946 [2024-10-17 19:18:41.059060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.946 [2024-10-17 19:18:41.059093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.946 [2024-10-17 19:18:41.059121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.946 [2024-10-17 19:18:41.059150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.946 [2024-10-17 19:18:41.059165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.946 [2024-10-17 19:18:41.059175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [2024-10-17 19:18:41.059187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.947 [2024-10-17 19:18:41.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [2024-10-17 19:18:41.059207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.947 [2024-10-17 19:18:41.059217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [2024-10-17 19:18:41.059229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.947 [2024-10-17 19:18:41.059238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [2024-10-17 19:18:41.059249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.947 [2024-10-17 19:18:41.059259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [2024-10-17 19:18:41.059271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.947 [2024-10-17 19:18:41.059280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.947 [the same nvme_qpair.c READ print_command / ABORTED - SQ DELETION pair repeats for cid:8 through cid:56 (lba:74752 through lba:80896, len:128) between 19:18:41.059291 and 19:18:41.060360] 00:16:31.948 [2024-10-17 19:18:41.060372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.948 [2024-10-17 19:18:41.060513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d77d0 is same with the state(6) to be set 00:16:31.948 [2024-10-17 19:18:41.060605] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5d77d0 was disconnected and freed. reset controller. 
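For orientation: the host_management trace above reduces to a short control flow. bdevperf runs a verify workload against Nvme0n1; the test polls the bdevperf RPC socket until at least 100 reads have completed; it then revokes the host's access to the subsystem, which is what drops the TCP qpair and produces the ABORTED - SQ DELETION completions logged above. A minimal shell sketch of that flow, reconstructed from the trace (the loop bounds and the helper name are assumptions, not the script's literal contents):

    # Sketch only: rpc.py, the socket path, the bdev name and the NQNs are taken from
    # the log; wait_for_reads is a hypothetical stand-in for the script's waitforio.
    wait_for_reads() {
        local sock=$1 bdev=$2 i reads
        for ((i = 10; i != 0; i--)); do
            # Ask the bdevperf app how many reads it has completed so far.
            reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [[ "$reads" -ge 100 ]] && return 0
            sleep 0.25
        done
        return 1
    }
    wait_for_reads /var/tmp/bdevperf.sock Nvme0n1
    # Revoke the host while I/O is still in flight; the target tears down the qpair and
    # every outstanding READ completes with ABORTED - SQ DELETION, as seen above.
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The nvmf_subsystem_add_host call a few lines below then restores access before the second bdevperf run.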
00:16:31.948 [2024-10-17 19:18:41.060683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.948 [2024-10-17 19:18:41.060699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.948 [2024-10-17 19:18:41.060720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.948 [2024-10-17 19:18:41.060740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.948 [2024-10-17 19:18:41.060760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.060770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d7b20 is same with the state(6) to be set 00:16:31.948 [2024-10-17 19:18:41.061939] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.948 task offset: 73728 on job bdev=Nvme0n1 fails 00:16:31.948 00:16:31.948 Latency(us) 00:16:31.948 [2024-10-17T19:18:41.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.948 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.948 Job: Nvme0n1 ended in about 0.43 seconds with error 00:16:31.948 Verification LBA range: start 0x0 length 0x400 00:16:31.948 Nvme0n1 : 0.43 1335.73 83.48 148.41 0.00 41656.39 4766.25 40274.85 00:16:31.948 [2024-10-17T19:18:41.206Z] =================================================================================================================== 00:16:31.948 [2024-10-17T19:18:41.206Z] Total : 1335.73 83.48 148.41 0.00 41656.39 4766.25 40274.85 00:16:31.948 [2024-10-17 19:18:41.064420] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.948 [2024-10-17 19:18:41.064444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7b20 (9): Bad file descriptor 00:16:31.948 [2024-10-17 19:18:41.065925] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:16:31.948 [2024-10-17 19:18:41.066055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:31.948 [2024-10-17 19:18:41.066082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.948 [2024-10-17 19:18:41.066099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:16:31.948 [2024-10-17 19:18:41.066110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:16:31.948 [2024-10-17 19:18:41.066121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:16:31.948 [2024-10-17 19:18:41.066165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5d7b20 00:16:31.948 [2024-10-17 19:18:41.066208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d7b20 (9): Bad file descriptor 00:16:31.948 [2024-10-17 19:18:41.066227] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:31.948 [2024-10-17 19:18:41.066238] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:31.948 [2024-10-17 19:18:41.066249] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:31.948 [2024-10-17 19:18:41.066267] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.948 19:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62619 00:16:32.882 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62619) - No such process 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:32.882 { 00:16:32.882 "params": { 00:16:32.882 "name": "Nvme$subsystem", 00:16:32.882 "trtype": "$TEST_TRANSPORT", 00:16:32.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.882 "adrfam": "ipv4", 00:16:32.882 "trsvcid": "$NVMF_PORT", 00:16:32.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.882 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:16:32.882 "hdgst": ${hdgst:-false}, 00:16:32.882 "ddgst": ${ddgst:-false} 00:16:32.882 }, 00:16:32.882 "method": "bdev_nvme_attach_controller" 00:16:32.882 } 00:16:32.882 EOF 00:16:32.882 )") 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:16:32.882 19:18:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:16:32.882 "params": { 00:16:32.882 "name": "Nvme0", 00:16:32.882 "trtype": "tcp", 00:16:32.882 "traddr": "10.0.0.3", 00:16:32.882 "adrfam": "ipv4", 00:16:32.882 "trsvcid": "4420", 00:16:32.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:32.882 "hdgst": false, 00:16:32.882 "ddgst": false 00:16:32.882 }, 00:16:32.882 "method": "bdev_nvme_attach_controller" 00:16:32.882 }' 00:16:33.139 [2024-10-17 19:18:42.142056] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:16:33.139 [2024-10-17 19:18:42.142180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:16:33.139 [2024-10-17 19:18:42.288061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.139 [2024-10-17 19:18:42.361741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.397 [2024-10-17 19:18:42.427699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.397 Running I/O for 1 seconds... 
00:16:34.335 1472.00 IOPS, 92.00 MiB/s 00:16:34.335 Latency(us) 00:16:34.335 [2024-10-17T19:18:43.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.335 Verification LBA range: start 0x0 length 0x400 00:16:34.335 Nvme0n1 : 1.02 1506.25 94.14 0.00 0.00 41658.87 4200.26 39559.91 00:16:34.335 [2024-10-17T19:18:43.593Z] =================================================================================================================== 00:16:34.335 [2024-10-17T19:18:43.593Z] Total : 1506.25 94.14 0.00 0.00 41658.87 4200.26 39559.91 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.593 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.593 rmmod nvme_tcp 00:16:34.593 rmmod nvme_fabrics 00:16:34.850 rmmod nvme_keyring 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62565 ']' 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62565 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62565 ']' 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62565 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62565 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62565' 00:16:34.850 killing process with pid 62565 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62565 00:16:34.850 19:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62565 00:16:35.108 [2024-10-17 19:18:44.176553] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:35.108 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.366 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:35.367 00:16:35.367 real 0m6.301s 00:16:35.367 user 0m22.754s 00:16:35.367 sys 0m1.547s 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.367 ************************************ 00:16:35.367 END TEST nvmf_host_management 00:16:35.367 ************************************ 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:35.367 ************************************ 00:16:35.367 START TEST nvmf_lvol 00:16:35.367 ************************************ 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:35.367 * Looking for test storage... 
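One detail of the teardown a few lines up that is easy to miss: nvmftestfini does not flush the firewall wholesale, it filters its own rules back out. The iptr step traced above amounts to the following sketch (the SPDK_NVMF tag is taken verbatim from the trace):

    # Re-load the saved ruleset minus every rule whose save line mentions SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore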
00:16:35.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:16:35.367 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.626 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:35.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.626 --rc genhtml_branch_coverage=1 00:16:35.626 --rc genhtml_function_coverage=1 00:16:35.627 --rc genhtml_legend=1 00:16:35.627 --rc geninfo_all_blocks=1 00:16:35.627 --rc geninfo_unexecuted_blocks=1 00:16:35.627 00:16:35.627 ' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:35.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.627 --rc genhtml_branch_coverage=1 00:16:35.627 --rc genhtml_function_coverage=1 00:16:35.627 --rc genhtml_legend=1 00:16:35.627 --rc geninfo_all_blocks=1 00:16:35.627 --rc geninfo_unexecuted_blocks=1 00:16:35.627 00:16:35.627 ' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:35.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.627 --rc genhtml_branch_coverage=1 00:16:35.627 --rc genhtml_function_coverage=1 00:16:35.627 --rc genhtml_legend=1 00:16:35.627 --rc geninfo_all_blocks=1 00:16:35.627 --rc geninfo_unexecuted_blocks=1 00:16:35.627 00:16:35.627 ' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:35.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.627 --rc genhtml_branch_coverage=1 00:16:35.627 --rc genhtml_function_coverage=1 00:16:35.627 --rc genhtml_legend=1 00:16:35.627 --rc geninfo_all_blocks=1 00:16:35.627 --rc geninfo_unexecuted_blocks=1 00:16:35.627 00:16:35.627 ' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.627 19:18:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:35.627 
19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
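For readers following the trace, the nvmf_veth_init steps that run in the lines below (the ip netns / ip link / ip addr calls from nvmf/common.sh) boil down to a small veth-plus-bridge topology using the interface and namespace names just defined. This is a condensed, hand-written sketch of those commands, not the script itself; bringing up the initiator-side interfaces, the namespace loopback, and the iptables ACCEPT rules that the real common.sh also adds are omitted here.

# Build the test topology: two initiator veth pairs and two target veth pairs,
# with the target ends moved into the nvmf_tgt_ns_spdk namespace and every
# host-side peer hung off a single bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if    # first initiator address
ip addr add 10.0.0.2/24 dev nvmf_init_if2   # second initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target address
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br    # bridge all host-side peers together
done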
00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.627 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:35.628 Cannot find device "nvmf_init_br" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:35.628 Cannot find device "nvmf_init_br2" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:35.628 Cannot find device "nvmf_tgt_br" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.628 Cannot find device "nvmf_tgt_br2" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:35.628 Cannot find device "nvmf_init_br" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:35.628 Cannot find device "nvmf_init_br2" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:35.628 Cannot find device "nvmf_tgt_br" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:35.628 Cannot find device "nvmf_tgt_br2" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:35.628 Cannot find device "nvmf_br" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:35.628 Cannot find device "nvmf_init_if" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:35.628 Cannot find device "nvmf_init_if2" 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.628 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.887 19:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:35.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:35.887 00:16:35.887 --- 10.0.0.3 ping statistics --- 00:16:35.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.887 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:35.887 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:35.887 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:16:35.887 00:16:35.887 --- 10.0.0.4 ping statistics --- 00:16:35.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.887 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:35.887 00:16:35.887 --- 10.0.0.1 ping statistics --- 00:16:35.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.887 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:35.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:35.887 00:16:35.887 --- 10.0.0.2 ping statistics --- 00:16:35.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.887 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=62929 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 62929 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62929 ']' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.887 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.145 [2024-10-17 19:18:45.169101] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
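The nvmfappstart sequence just traced launches nvmf_tgt inside the test namespace and then blocks in waitforlisten until the target's RPC socket answers. A minimal hand-rolled equivalent is sketched below, assuming the default /var/tmp/spdk.sock path seen above and using rpc_get_methods as the liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping than this).

# Launch the NVMe-oF target inside the namespace with the same flags as the trace:
# -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask), -m 0x7 (cores 0-2).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Poll until the UNIX-domain RPC socket exists and responds, bailing out if the
# target process dies during startup.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until [ -S /var/tmp/spdk.sock ] && "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.2
done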
00:16:36.146 [2024-10-17 19:18:45.169233] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.146 [2024-10-17 19:18:45.312185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.404 [2024-10-17 19:18:45.405000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.404 [2024-10-17 19:18:45.405075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.404 [2024-10-17 19:18:45.405090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.404 [2024-10-17 19:18:45.405100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.404 [2024-10-17 19:18:45.405110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.404 [2024-10-17 19:18:45.406338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.404 [2024-10-17 19:18:45.406499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.404 [2024-10-17 19:18:45.406505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.404 [2024-10-17 19:18:45.463256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.404 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.662 [2024-10-17 19:18:45.905145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.920 19:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.178 19:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:37.178 19:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.436 19:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:37.436 19:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:37.714 19:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:37.972 19:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=dc11844e-ccd0-4127-add7-d63b7fe85c87 00:16:37.972 19:18:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dc11844e-ccd0-4127-add7-d63b7fe85c87 lvol 20 00:16:38.539 19:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6932f9e0-7c69-4ac7-970a-2bb2cf48156c 00:16:38.539 19:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:38.539 19:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6932f9e0-7c69-4ac7-970a-2bb2cf48156c 00:16:39.105 19:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:39.105 [2024-10-17 19:18:48.341186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:39.364 19:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:39.621 19:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62997 00:16:39.621 19:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:39.621 19:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:40.555 19:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6932f9e0-7c69-4ac7-970a-2bb2cf48156c MY_SNAPSHOT 00:16:40.813 19:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0f7e3111-7320-4351-acae-2d9b9f4a49ba 00:16:40.813 19:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6932f9e0-7c69-4ac7-970a-2bb2cf48156c 30 00:16:41.379 19:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0f7e3111-7320-4351-acae-2d9b9f4a49ba MY_CLONE 00:16:41.637 19:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9bae8817-b235-4003-b594-1895668a47e3 00:16:41.638 19:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9bae8817-b235-4003-b594-1895668a47e3 00:16:42.204 19:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62997 00:16:50.411 Initializing NVMe Controllers 00:16:50.411 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:16:50.411 Controller IO queue size 128, less than required. 00:16:50.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:50.411 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:50.411 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:50.411 Initialization complete. Launching workers. 
00:16:50.411 ======================================================== 00:16:50.411 Latency(us) 00:16:50.411 Device Information : IOPS MiB/s Average min max 00:16:50.411 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6451.70 25.20 19855.58 2227.23 84411.82 00:16:50.411 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6932.90 27.08 18481.39 3368.05 81792.17 00:16:50.411 ======================================================== 00:16:50.411 Total : 13384.60 52.28 19143.78 2227.23 84411.82 00:16:50.411 00:16:50.411 19:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:50.411 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6932f9e0-7c69-4ac7-970a-2bb2cf48156c 00:16:50.411 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc11844e-ccd0-4127-add7-d63b7fe85c87 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.670 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.670 rmmod nvme_tcp 00:16:50.928 rmmod nvme_fabrics 00:16:50.928 rmmod nvme_keyring 00:16:50.928 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.928 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:16:50.928 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:16:50.928 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 62929 ']' 00:16:50.928 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 62929 00:16:50.929 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62929 ']' 00:16:50.929 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62929 00:16:50.929 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:16:50.929 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.929 19:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62929 00:16:50.929 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.929 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.929 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 62929' 00:16:50.929 killing process with pid 62929 00:16:50.929 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62929 00:16:50.929 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62929 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:51.188 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:16:51.447 00:16:51.447 real 0m16.013s 00:16:51.447 user 1m6.185s 00:16:51.447 sys 0m4.244s 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:16:51.447 ************************************ 00:16:51.447 END TEST nvmf_lvol 00:16:51.447 ************************************ 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:51.447 ************************************ 00:16:51.447 START TEST nvmf_lvs_grow 00:16:51.447 ************************************ 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:51.447 * Looking for test storage... 00:16:51.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:16:51.447 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.707 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.708 --rc genhtml_branch_coverage=1 00:16:51.708 --rc genhtml_function_coverage=1 00:16:51.708 --rc genhtml_legend=1 00:16:51.708 --rc geninfo_all_blocks=1 00:16:51.708 --rc geninfo_unexecuted_blocks=1 00:16:51.708 00:16:51.708 ' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.708 --rc genhtml_branch_coverage=1 00:16:51.708 --rc genhtml_function_coverage=1 00:16:51.708 --rc genhtml_legend=1 00:16:51.708 --rc geninfo_all_blocks=1 00:16:51.708 --rc geninfo_unexecuted_blocks=1 00:16:51.708 00:16:51.708 ' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.708 --rc genhtml_branch_coverage=1 00:16:51.708 --rc genhtml_function_coverage=1 00:16:51.708 --rc genhtml_legend=1 00:16:51.708 --rc geninfo_all_blocks=1 00:16:51.708 --rc geninfo_unexecuted_blocks=1 00:16:51.708 00:16:51.708 ' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.708 --rc genhtml_branch_coverage=1 00:16:51.708 --rc genhtml_function_coverage=1 00:16:51.708 --rc genhtml_legend=1 00:16:51.708 --rc geninfo_all_blocks=1 00:16:51.708 --rc geninfo_unexecuted_blocks=1 00:16:51.708 00:16:51.708 ' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:51.708 19:19:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
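The two variables defined just above matter for the rest of this test: the nvmf target keeps answering on the default /var/tmp/spdk.sock, while the bdevperf instance used later in the log gets its own socket at /var/tmp/bdevperf.sock so both applications can be driven over JSON-RPC at the same time. A short sketch of how the two sockets are typically addressed with rpc.py follows; the specific RPC methods shown (nvmf_get_subsystems, bdev_get_bdevs) are illustrative choices, and starting bdevperf with -r to pick its RPC listen path is assumed rather than taken from this excerpt.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Query the nvmf target on its default socket (/var/tmp/spdk.sock).
"$rpc_py" nvmf_get_subsystems

# Query a bdevperf instance that was started with "-r $bdevperf_rpc_sock",
# by pointing rpc.py at that second socket explicitly.
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs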
00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.708 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:51.709 Cannot find device "nvmf_init_br" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:51.709 Cannot find device "nvmf_init_br2" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:51.709 Cannot find device "nvmf_tgt_br" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.709 Cannot find device "nvmf_tgt_br2" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:51.709 Cannot find device "nvmf_init_br" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:51.709 Cannot find device "nvmf_init_br2" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:51.709 Cannot find device "nvmf_tgt_br" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:51.709 Cannot find device "nvmf_tgt_br2" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:51.709 Cannot find device "nvmf_br" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:51.709 Cannot find device "nvmf_init_if" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:51.709 Cannot find device "nvmf_init_if2" 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.709 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.968 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:51.968 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.968 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.968 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.968 19:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
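At this point nvmf_veth_init has finished wiring the test topology: the two initiator-side veth endpoints stay on the host (10.0.0.1 on nvmf_init_if, 10.0.0.2 on nvmf_init_if2), the two target-side endpoints are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 on nvmf_tgt_if, 10.0.0.4 on nvmf_tgt_if2), and the four bridge-side peers are enslaved to nvmf_br. A condensed sketch of the equivalent manual setup, using only the interface names and addresses that appear in the trace above:

  # sketch: recreate the veth/namespace/bridge topology built by nvmf_veth_init
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator 1, stays on the host
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator 2, stays on the host
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target 1, moved into the namespace
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target 2, moved into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br  master nvmf_br   # host and namespace halves meet on the bridge
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

The iptables rules that follow accept TCP port 4420 on the initiator interfaces and forwarded traffic on the bridge (each rule carries an SPDK_NVMF comment tag), and the four pings confirm host-to-namespace reachability in both directions before the NVMe/TCP target is started.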
00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:51.968 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:51.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:51.968 00:16:51.968 --- 10.0.0.3 ping statistics --- 00:16:51.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.968 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:51.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:51.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:51.969 00:16:51.969 --- 10.0.0.4 ping statistics --- 00:16:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.969 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:51.969 00:16:51.969 --- 10.0.0.1 ping statistics --- 00:16:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.969 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:51.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:51.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:51.969 00:16:51.969 --- 10.0.0.2 ping statistics --- 00:16:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.969 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:51.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63391 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63391 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63391 ']' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.969 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.227 [2024-10-17 19:19:01.272107] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:52.227 [2024-10-17 19:19:01.272396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.227 [2024-10-17 19:19:01.405616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.486 [2024-10-17 19:19:01.489814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.486 [2024-10-17 19:19:01.490149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.486 [2024-10-17 19:19:01.490378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.486 [2024-10-17 19:19:01.490537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.486 [2024-10-17 19:19:01.490734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.486 [2024-10-17 19:19:01.491304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.486 [2024-10-17 19:19:01.550172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.486 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:52.744 [2024-10-17 19:19:01.913085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:52.744 ************************************ 00:16:52.744 START TEST lvs_grow_clean 00:16:52.744 ************************************ 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:52.744 19:19:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:52.744 19:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:53.311 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:53.311 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:53.569 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21a91abf-bc58-413f-9a91-f898ceba696d 00:16:53.569 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:16:53.569 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:53.827 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:53.827 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:53.827 19:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 21a91abf-bc58-413f-9a91-f898ceba696d lvol 150 00:16:54.085 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=60b68233-7510-45fb-9346-4e09bdfab3cd 00:16:54.085 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:54.085 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:54.343 [2024-10-17 19:19:03.485283] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:54.343 [2024-10-17 19:19:03.485397] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:54.343 true 00:16:54.343 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:54.343 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:16:54.601 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:54.601 19:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:54.860 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60b68233-7510-45fb-9346-4e09bdfab3cd 00:16:55.428 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:55.428 [2024-10-17 19:19:04.667604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:55.687 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63466 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63466 /var/tmp/bdevperf.sock 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63466 ']' 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.945 19:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:55.945 [2024-10-17 19:19:05.000685] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:16:55.945 [2024-10-17 19:19:05.000992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63466 ] 00:16:55.945 [2024-10-17 19:19:05.136724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.205 [2024-10-17 19:19:05.215667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.205 [2024-10-17 19:19:05.288900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:56.205 19:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.205 19:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:16:56.205 19:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:56.463 Nvme0n1 00:16:56.788 19:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:56.788 [ 00:16:56.788 { 00:16:56.788 "name": "Nvme0n1", 00:16:56.788 "aliases": [ 00:16:56.788 "60b68233-7510-45fb-9346-4e09bdfab3cd" 00:16:56.788 ], 00:16:56.788 "product_name": "NVMe disk", 00:16:56.788 "block_size": 4096, 00:16:56.788 "num_blocks": 38912, 00:16:56.788 "uuid": "60b68233-7510-45fb-9346-4e09bdfab3cd", 00:16:56.788 "numa_id": -1, 00:16:56.788 "assigned_rate_limits": { 00:16:56.788 "rw_ios_per_sec": 0, 00:16:56.788 "rw_mbytes_per_sec": 0, 00:16:56.788 "r_mbytes_per_sec": 0, 00:16:56.788 "w_mbytes_per_sec": 0 00:16:56.788 }, 00:16:56.788 "claimed": false, 00:16:56.788 "zoned": false, 00:16:56.788 "supported_io_types": { 00:16:56.788 "read": true, 00:16:56.788 "write": true, 00:16:56.788 "unmap": true, 00:16:56.788 "flush": true, 00:16:56.788 "reset": true, 00:16:56.788 "nvme_admin": true, 00:16:56.789 "nvme_io": true, 00:16:56.789 "nvme_io_md": false, 00:16:56.789 "write_zeroes": true, 00:16:56.789 "zcopy": false, 00:16:56.789 "get_zone_info": false, 00:16:56.789 "zone_management": false, 00:16:56.789 "zone_append": false, 00:16:56.789 "compare": true, 00:16:56.789 "compare_and_write": true, 00:16:56.789 "abort": true, 00:16:56.789 "seek_hole": false, 00:16:56.789 "seek_data": false, 00:16:56.789 "copy": true, 00:16:56.789 "nvme_iov_md": false 00:16:56.789 }, 00:16:56.789 "memory_domains": [ 00:16:56.789 { 00:16:56.789 "dma_device_id": "system", 00:16:56.789 "dma_device_type": 1 00:16:56.789 } 00:16:56.789 ], 00:16:56.789 "driver_specific": { 00:16:56.789 "nvme": [ 00:16:56.789 { 00:16:56.789 "trid": { 00:16:56.789 "trtype": "TCP", 00:16:56.789 "adrfam": "IPv4", 00:16:56.789 "traddr": "10.0.0.3", 00:16:56.789 "trsvcid": "4420", 00:16:56.789 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:56.789 }, 00:16:56.789 "ctrlr_data": { 00:16:56.789 "cntlid": 1, 00:16:56.789 "vendor_id": "0x8086", 00:16:56.789 "model_number": "SPDK bdev Controller", 00:16:56.789 "serial_number": "SPDK0", 00:16:56.789 "firmware_revision": "25.01", 00:16:56.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:56.789 "oacs": { 00:16:56.789 "security": 0, 00:16:56.789 "format": 0, 00:16:56.789 "firmware": 0, 
00:16:56.789 "ns_manage": 0 00:16:56.789 }, 00:16:56.789 "multi_ctrlr": true, 00:16:56.789 "ana_reporting": false 00:16:56.789 }, 00:16:56.789 "vs": { 00:16:56.789 "nvme_version": "1.3" 00:16:56.789 }, 00:16:56.789 "ns_data": { 00:16:56.789 "id": 1, 00:16:56.789 "can_share": true 00:16:56.789 } 00:16:56.789 } 00:16:56.789 ], 00:16:56.789 "mp_policy": "active_passive" 00:16:56.789 } 00:16:56.789 } 00:16:56.789 ] 00:16:56.789 19:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63482 00:16:56.789 19:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.789 19:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.046 Running I/O for 10 seconds... 00:16:57.984 Latency(us) 00:16:57.984 [2024-10-17T19:19:07.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.984 Nvme0n1 : 1.00 7227.00 28.23 0.00 0.00 0.00 0.00 0.00 00:16:57.984 [2024-10-17T19:19:07.242Z] =================================================================================================================== 00:16:57.984 [2024-10-17T19:19:07.242Z] Total : 7227.00 28.23 0.00 0.00 0.00 0.00 0.00 00:16:57.984 00:16:58.919 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:16:58.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.919 Nvme0n1 : 2.00 7106.00 27.76 0.00 0.00 0.00 0.00 0.00 00:16:58.919 [2024-10-17T19:19:08.177Z] =================================================================================================================== 00:16:58.919 [2024-10-17T19:19:08.177Z] Total : 7106.00 27.76 0.00 0.00 0.00 0.00 0.00 00:16:58.919 00:16:59.177 true 00:16:59.177 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:16:59.177 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:59.436 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:59.436 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:59.436 19:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63482 00:17:00.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.003 Nvme0n1 : 3.00 7108.00 27.77 0.00 0.00 0.00 0.00 0.00 00:17:00.003 [2024-10-17T19:19:09.261Z] =================================================================================================================== 00:17:00.003 [2024-10-17T19:19:09.261Z] Total : 7108.00 27.77 0.00 0.00 0.00 0.00 0.00 00:17:00.003 00:17:00.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.938 Nvme0n1 : 4.00 6950.25 27.15 0.00 0.00 0.00 0.00 0.00 00:17:00.938 [2024-10-17T19:19:10.196Z] 
=================================================================================================================== 00:17:00.938 [2024-10-17T19:19:10.196Z] Total : 6950.25 27.15 0.00 0.00 0.00 0.00 0.00 00:17:00.938 00:17:01.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.905 Nvme0n1 : 5.00 6957.20 27.18 0.00 0.00 0.00 0.00 0.00 00:17:01.905 [2024-10-17T19:19:11.163Z] =================================================================================================================== 00:17:01.905 [2024-10-17T19:19:11.163Z] Total : 6957.20 27.18 0.00 0.00 0.00 0.00 0.00 00:17:01.905 00:17:03.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.281 Nvme0n1 : 6.00 6961.83 27.19 0.00 0.00 0.00 0.00 0.00 00:17:03.281 [2024-10-17T19:19:12.539Z] =================================================================================================================== 00:17:03.281 [2024-10-17T19:19:12.539Z] Total : 6961.83 27.19 0.00 0.00 0.00 0.00 0.00 00:17:03.281 00:17:04.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.215 Nvme0n1 : 7.00 6928.86 27.07 0.00 0.00 0.00 0.00 0.00 00:17:04.215 [2024-10-17T19:19:13.473Z] =================================================================================================================== 00:17:04.215 [2024-10-17T19:19:13.473Z] Total : 6928.86 27.07 0.00 0.00 0.00 0.00 0.00 00:17:04.215 00:17:05.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.148 Nvme0n1 : 8.00 6935.88 27.09 0.00 0.00 0.00 0.00 0.00 00:17:05.148 [2024-10-17T19:19:14.406Z] =================================================================================================================== 00:17:05.148 [2024-10-17T19:19:14.406Z] Total : 6935.88 27.09 0.00 0.00 0.00 0.00 0.00 00:17:05.148 00:17:06.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.084 Nvme0n1 : 9.00 6927.22 27.06 0.00 0.00 0.00 0.00 0.00 00:17:06.084 [2024-10-17T19:19:15.342Z] =================================================================================================================== 00:17:06.084 [2024-10-17T19:19:15.342Z] Total : 6927.22 27.06 0.00 0.00 0.00 0.00 0.00 00:17:06.084 00:17:07.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.019 Nvme0n1 : 10.00 6920.30 27.03 0.00 0.00 0.00 0.00 0.00 00:17:07.019 [2024-10-17T19:19:16.277Z] =================================================================================================================== 00:17:07.019 [2024-10-17T19:19:16.277Z] Total : 6920.30 27.03 0.00 0.00 0.00 0.00 0.00 00:17:07.019 00:17:07.019 00:17:07.019 Latency(us) 00:17:07.019 [2024-10-17T19:19:16.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.019 Nvme0n1 : 10.01 6928.64 27.07 0.00 0.00 18470.73 5451.40 105810.85 00:17:07.019 [2024-10-17T19:19:16.277Z] =================================================================================================================== 00:17:07.019 [2024-10-17T19:19:16.277Z] Total : 6928.64 27.07 0.00 0.00 18470.73 5451.40 105810.85 00:17:07.019 { 00:17:07.019 "results": [ 00:17:07.019 { 00:17:07.019 "job": "Nvme0n1", 00:17:07.019 "core_mask": "0x2", 00:17:07.019 "workload": "randwrite", 00:17:07.019 "status": "finished", 00:17:07.019 "queue_depth": 128, 00:17:07.019 "io_size": 4096, 00:17:07.019 "runtime": 
10.006434, 00:17:07.019 "iops": 6928.642111665355, 00:17:07.019 "mibps": 27.06500824869279, 00:17:07.019 "io_failed": 0, 00:17:07.019 "io_timeout": 0, 00:17:07.019 "avg_latency_us": 18470.727945022623, 00:17:07.019 "min_latency_us": 5451.403636363636, 00:17:07.019 "max_latency_us": 105810.8509090909 00:17:07.019 } 00:17:07.019 ], 00:17:07.019 "core_count": 1 00:17:07.019 } 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63466 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63466 ']' 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63466 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63466 00:17:07.019 killing process with pid 63466 00:17:07.019 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.019 00:17:07.019 Latency(us) 00:17:07.019 [2024-10-17T19:19:16.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.019 [2024-10-17T19:19:16.277Z] =================================================================================================================== 00:17:07.019 [2024-10-17T19:19:16.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63466' 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63466 00:17:07.019 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63466 00:17:07.369 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:07.628 19:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:07.888 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:07.888 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:08.147 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:08.147 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:08.147 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:08.406 [2024-10-17 19:19:17.635873] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:08.665 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:08.923 request: 00:17:08.923 { 00:17:08.923 "uuid": "21a91abf-bc58-413f-9a91-f898ceba696d", 00:17:08.923 "method": "bdev_lvol_get_lvstores", 00:17:08.923 "req_id": 1 00:17:08.923 } 00:17:08.923 Got JSON-RPC error response 00:17:08.923 response: 00:17:08.923 { 00:17:08.923 "code": -19, 00:17:08.923 "message": "No such device" 00:17:08.923 } 00:17:08.923 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:17:08.923 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.923 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.923 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.923 19:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.182 aio_bdev 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
60b68233-7510-45fb-9346-4e09bdfab3cd 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=60b68233-7510-45fb-9346-4e09bdfab3cd 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:09.182 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.439 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60b68233-7510-45fb-9346-4e09bdfab3cd -t 2000 00:17:09.698 [ 00:17:09.698 { 00:17:09.698 "name": "60b68233-7510-45fb-9346-4e09bdfab3cd", 00:17:09.698 "aliases": [ 00:17:09.698 "lvs/lvol" 00:17:09.698 ], 00:17:09.698 "product_name": "Logical Volume", 00:17:09.698 "block_size": 4096, 00:17:09.698 "num_blocks": 38912, 00:17:09.698 "uuid": "60b68233-7510-45fb-9346-4e09bdfab3cd", 00:17:09.698 "assigned_rate_limits": { 00:17:09.698 "rw_ios_per_sec": 0, 00:17:09.698 "rw_mbytes_per_sec": 0, 00:17:09.698 "r_mbytes_per_sec": 0, 00:17:09.698 "w_mbytes_per_sec": 0 00:17:09.698 }, 00:17:09.698 "claimed": false, 00:17:09.698 "zoned": false, 00:17:09.698 "supported_io_types": { 00:17:09.698 "read": true, 00:17:09.698 "write": true, 00:17:09.698 "unmap": true, 00:17:09.698 "flush": false, 00:17:09.698 "reset": true, 00:17:09.698 "nvme_admin": false, 00:17:09.698 "nvme_io": false, 00:17:09.698 "nvme_io_md": false, 00:17:09.698 "write_zeroes": true, 00:17:09.698 "zcopy": false, 00:17:09.698 "get_zone_info": false, 00:17:09.698 "zone_management": false, 00:17:09.698 "zone_append": false, 00:17:09.698 "compare": false, 00:17:09.698 "compare_and_write": false, 00:17:09.698 "abort": false, 00:17:09.698 "seek_hole": true, 00:17:09.698 "seek_data": true, 00:17:09.698 "copy": false, 00:17:09.698 "nvme_iov_md": false 00:17:09.698 }, 00:17:09.698 "driver_specific": { 00:17:09.698 "lvol": { 00:17:09.698 "lvol_store_uuid": "21a91abf-bc58-413f-9a91-f898ceba696d", 00:17:09.698 "base_bdev": "aio_bdev", 00:17:09.698 "thin_provision": false, 00:17:09.698 "num_allocated_clusters": 38, 00:17:09.698 "snapshot": false, 00:17:09.698 "clone": false, 00:17:09.698 "esnap_clone": false 00:17:09.698 } 00:17:09.698 } 00:17:09.698 } 00:17:09.698 ] 00:17:09.698 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:17:09.698 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:09.698 19:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:09.956 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:09.956 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:09.956 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:10.220 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:10.220 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 60b68233-7510-45fb-9346-4e09bdfab3cd 00:17:10.490 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21a91abf-bc58-413f-9a91-f898ceba696d 00:17:10.749 19:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.006 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:11.573 ************************************ 00:17:11.573 END TEST lvs_grow_clean 00:17:11.573 ************************************ 00:17:11.573 00:17:11.573 real 0m18.588s 00:17:11.573 user 0m17.309s 00:17:11.573 sys 0m2.678s 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.573 ************************************ 00:17:11.573 START TEST lvs_grow_dirty 00:17:11.573 ************************************ 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:11.573 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:11.574 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:11.574 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:11.574 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:11.574 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:11.574 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.832 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:11.832 19:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:12.090 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=003777e7-3637-4db6-84c6-1678a2c538fb 00:17:12.090 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:12.090 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:12.349 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:12.349 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:12.349 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 003777e7-3637-4db6-84c6-1678a2c538fb lvol 150 00:17:12.607 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:12.607 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:12.607 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:12.866 [2024-10-17 19:19:21.959015] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:12.866 [2024-10-17 19:19:21.959119] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:12.866 true 00:17:12.866 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:12.866 19:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:13.125 19:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:13.125 19:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:13.403 19:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:13.662 19:19:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:13.920 [2024-10-17 19:19:23.047736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.921 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:14.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63734 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63734 /var/tmp/bdevperf.sock 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63734 ']' 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.180 19:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:14.180 [2024-10-17 19:19:23.389268] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:17:14.180 [2024-10-17 19:19:23.390268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63734 ] 00:17:14.439 [2024-10-17 19:19:23.534835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.439 [2024-10-17 19:19:23.617949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.439 [2024-10-17 19:19:23.693702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.373 19:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.373 19:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:17:15.373 19:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:15.632 Nvme0n1 00:17:15.632 19:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:15.891 [ 00:17:15.891 { 00:17:15.891 "name": "Nvme0n1", 00:17:15.891 "aliases": [ 00:17:15.891 "5e7679b3-ead8-42b5-a6c9-162eb495c20c" 00:17:15.891 ], 00:17:15.891 "product_name": "NVMe disk", 00:17:15.891 "block_size": 4096, 00:17:15.891 "num_blocks": 38912, 00:17:15.891 "uuid": "5e7679b3-ead8-42b5-a6c9-162eb495c20c", 00:17:15.891 "numa_id": -1, 00:17:15.891 "assigned_rate_limits": { 00:17:15.891 "rw_ios_per_sec": 0, 00:17:15.891 "rw_mbytes_per_sec": 0, 00:17:15.891 "r_mbytes_per_sec": 0, 00:17:15.891 "w_mbytes_per_sec": 0 00:17:15.891 }, 00:17:15.891 "claimed": false, 00:17:15.891 "zoned": false, 00:17:15.891 "supported_io_types": { 00:17:15.891 "read": true, 00:17:15.891 "write": true, 00:17:15.891 "unmap": true, 00:17:15.891 "flush": true, 00:17:15.891 "reset": true, 00:17:15.891 "nvme_admin": true, 00:17:15.891 "nvme_io": true, 00:17:15.891 "nvme_io_md": false, 00:17:15.891 "write_zeroes": true, 00:17:15.891 "zcopy": false, 00:17:15.891 "get_zone_info": false, 00:17:15.891 "zone_management": false, 00:17:15.891 "zone_append": false, 00:17:15.891 "compare": true, 00:17:15.891 "compare_and_write": true, 00:17:15.891 "abort": true, 00:17:15.891 "seek_hole": false, 00:17:15.891 "seek_data": false, 00:17:15.891 "copy": true, 00:17:15.891 "nvme_iov_md": false 00:17:15.891 }, 00:17:15.891 "memory_domains": [ 00:17:15.891 { 00:17:15.891 "dma_device_id": "system", 00:17:15.891 "dma_device_type": 1 00:17:15.891 } 00:17:15.891 ], 00:17:15.891 "driver_specific": { 00:17:15.891 "nvme": [ 00:17:15.891 { 00:17:15.891 "trid": { 00:17:15.891 "trtype": "TCP", 00:17:15.891 "adrfam": "IPv4", 00:17:15.891 "traddr": "10.0.0.3", 00:17:15.891 "trsvcid": "4420", 00:17:15.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:15.891 }, 00:17:15.891 "ctrlr_data": { 00:17:15.891 "cntlid": 1, 00:17:15.891 "vendor_id": "0x8086", 00:17:15.891 "model_number": "SPDK bdev Controller", 00:17:15.891 "serial_number": "SPDK0", 00:17:15.891 "firmware_revision": "25.01", 00:17:15.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:15.891 "oacs": { 00:17:15.891 "security": 0, 00:17:15.891 "format": 0, 00:17:15.891 "firmware": 0, 
00:17:15.891 "ns_manage": 0 00:17:15.891 }, 00:17:15.891 "multi_ctrlr": true, 00:17:15.891 "ana_reporting": false 00:17:15.891 }, 00:17:15.891 "vs": { 00:17:15.891 "nvme_version": "1.3" 00:17:15.891 }, 00:17:15.891 "ns_data": { 00:17:15.891 "id": 1, 00:17:15.891 "can_share": true 00:17:15.891 } 00:17:15.891 } 00:17:15.891 ], 00:17:15.891 "mp_policy": "active_passive" 00:17:15.891 } 00:17:15.891 } 00:17:15.891 ] 00:17:15.891 19:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63763 00:17:15.891 19:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.891 19:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:16.150 Running I/O for 10 seconds... 00:17:17.084 Latency(us) 00:17:17.084 [2024-10-17T19:19:26.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.085 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:17:17.085 [2024-10-17T19:19:26.343Z] =================================================================================================================== 00:17:17.085 [2024-10-17T19:19:26.343Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:17:17.085 00:17:18.020 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:18.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.020 Nvme0n1 : 2.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:17:18.020 [2024-10-17T19:19:27.278Z] =================================================================================================================== 00:17:18.020 [2024-10-17T19:19:27.278Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:17:18.020 00:17:18.279 true 00:17:18.279 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:18.279 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:18.536 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:18.537 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:18.537 19:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63763 00:17:19.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.102 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:17:19.102 [2024-10-17T19:19:28.360Z] =================================================================================================================== 00:17:19.102 [2024-10-17T19:19:28.360Z] Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:17:19.102 00:17:20.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.035 Nvme0n1 : 4.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:17:20.035 [2024-10-17T19:19:29.293Z] 
=================================================================================================================== 00:17:20.035 [2024-10-17T19:19:29.293Z] Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:17:20.035 00:17:20.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.987 Nvme0n1 : 5.00 7162.80 27.98 0.00 0.00 0.00 0.00 0.00 00:17:20.987 [2024-10-17T19:19:30.245Z] =================================================================================================================== 00:17:20.987 [2024-10-17T19:19:30.245Z] Total : 7162.80 27.98 0.00 0.00 0.00 0.00 0.00 00:17:20.987 00:17:21.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.922 Nvme0n1 : 6.00 7111.50 27.78 0.00 0.00 0.00 0.00 0.00 00:17:21.922 [2024-10-17T19:19:31.180Z] =================================================================================================================== 00:17:21.922 [2024-10-17T19:19:31.180Z] Total : 7111.50 27.78 0.00 0.00 0.00 0.00 0.00 00:17:21.922 00:17:23.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.296 Nvme0n1 : 7.00 7039.00 27.50 0.00 0.00 0.00 0.00 0.00 00:17:23.296 [2024-10-17T19:19:32.554Z] =================================================================================================================== 00:17:23.296 [2024-10-17T19:19:32.554Z] Total : 7039.00 27.50 0.00 0.00 0.00 0.00 0.00 00:17:23.296 00:17:24.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.232 Nvme0n1 : 8.00 7016.38 27.41 0.00 0.00 0.00 0.00 0.00 00:17:24.232 [2024-10-17T19:19:33.490Z] =================================================================================================================== 00:17:24.232 [2024-10-17T19:19:33.490Z] Total : 7016.38 27.41 0.00 0.00 0.00 0.00 0.00 00:17:24.232 00:17:25.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.166 Nvme0n1 : 9.00 6984.67 27.28 0.00 0.00 0.00 0.00 0.00 00:17:25.166 [2024-10-17T19:19:34.424Z] =================================================================================================================== 00:17:25.166 [2024-10-17T19:19:34.424Z] Total : 6984.67 27.28 0.00 0.00 0.00 0.00 0.00 00:17:25.166 00:17:26.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.166 Nvme0n1 : 10.00 6959.30 27.18 0.00 0.00 0.00 0.00 0.00 00:17:26.166 [2024-10-17T19:19:35.424Z] =================================================================================================================== 00:17:26.166 [2024-10-17T19:19:35.424Z] Total : 6959.30 27.18 0.00 0.00 0.00 0.00 0.00 00:17:26.166 00:17:26.166 00:17:26.166 Latency(us) 00:17:26.166 [2024-10-17T19:19:35.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.166 Nvme0n1 : 10.01 6963.78 27.20 0.00 0.00 18375.54 14537.08 63391.19 00:17:26.166 [2024-10-17T19:19:35.424Z] =================================================================================================================== 00:17:26.166 [2024-10-17T19:19:35.424Z] Total : 6963.78 27.20 0.00 0.00 18375.54 14537.08 63391.19 00:17:26.166 { 00:17:26.166 "results": [ 00:17:26.166 { 00:17:26.166 "job": "Nvme0n1", 00:17:26.166 "core_mask": "0x2", 00:17:26.166 "workload": "randwrite", 00:17:26.166 "status": "finished", 00:17:26.166 "queue_depth": 128, 00:17:26.166 "io_size": 4096, 00:17:26.166 "runtime": 
10.011942, 00:17:26.166 "iops": 6963.783849327134, 00:17:26.166 "mibps": 27.202280661434116, 00:17:26.166 "io_failed": 0, 00:17:26.166 "io_timeout": 0, 00:17:26.166 "avg_latency_us": 18375.538369527374, 00:17:26.166 "min_latency_us": 14537.076363636364, 00:17:26.166 "max_latency_us": 63391.185454545455 00:17:26.166 } 00:17:26.166 ], 00:17:26.166 "core_count": 1 00:17:26.166 } 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63734 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63734 ']' 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63734 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63734 00:17:26.166 killing process with pid 63734 00:17:26.166 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.166 00:17:26.166 Latency(us) 00:17:26.166 [2024-10-17T19:19:35.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.166 [2024-10-17T19:19:35.424Z] =================================================================================================================== 00:17:26.166 [2024-10-17T19:19:35.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63734' 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63734 00:17:26.166 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63734 00:17:26.425 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:26.683 19:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:26.941 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:26.941 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63391 
00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63391 00:17:27.509 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63391 Killed "${NVMF_APP[@]}" "$@" 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=63901 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 63901 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63901 ']' 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.509 19:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:27.509 [2024-10-17 19:19:36.594643] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:17:27.509 [2024-10-17 19:19:36.594765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.509 [2024-10-17 19:19:36.739166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.767 [2024-10-17 19:19:36.816522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.767 [2024-10-17 19:19:36.816631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.767 [2024-10-17 19:19:36.816658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.767 [2024-10-17 19:19:36.816677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.767 [2024-10-17 19:19:36.816697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
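The records above close out the "dirty" setup of this test: the lvstore behind the exported volume was grown with bdev_lvol_grow_lvstore while bdevperf I/O was still running, the first nvmf_tgt (pid 63391) was then killed with kill -9 so it never persisted clean lvstore metadata, and a replacement target (pid 63901) was started in the same namespace. The step that follows re-creates the AIO bdev over the same backing file, which forces the blobstore load path through recovery (the bs_recover / "Recover: blob" notices below). A condensed sketch of that recovery sequence, using the same rpc.py invocations and UUIDs that appear later in this log:

  # re-create the AIO bdev over the unchanged backing file (4096-byte blocks);
  # loading its lvstore now goes through blobstore recovery because the previous
  # target was killed with -9 while the metadata was dirty
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # wait for bdev examine to register the recovered logical volume
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e7679b3-ead8-42b5-a6c9-162eb495c20c -t 2000
  # the recovered lvstore should report the post-grow cluster counts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb

The same checks are then repeated after deleting and re-creating aio_bdev a second time, before the lvol and lvstore are finally removed and the backing file is deleted.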
00:17:27.767 [2024-10-17 19:19:36.817193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.767 [2024-10-17 19:19:36.875966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.702 19:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:28.962 [2024-10-17 19:19:37.957778] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:28.962 [2024-10-17 19:19:37.958212] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:28.962 [2024-10-17 19:19:37.958441] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:28.962 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.221 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e7679b3-ead8-42b5-a6c9-162eb495c20c -t 2000 00:17:29.480 [ 00:17:29.480 { 00:17:29.480 "name": "5e7679b3-ead8-42b5-a6c9-162eb495c20c", 00:17:29.480 "aliases": [ 00:17:29.480 "lvs/lvol" 00:17:29.480 ], 00:17:29.480 "product_name": "Logical Volume", 00:17:29.480 "block_size": 4096, 00:17:29.480 "num_blocks": 38912, 00:17:29.480 "uuid": "5e7679b3-ead8-42b5-a6c9-162eb495c20c", 00:17:29.480 "assigned_rate_limits": { 00:17:29.480 "rw_ios_per_sec": 0, 00:17:29.480 "rw_mbytes_per_sec": 0, 00:17:29.480 "r_mbytes_per_sec": 0, 00:17:29.480 "w_mbytes_per_sec": 0 00:17:29.480 }, 00:17:29.480 
"claimed": false, 00:17:29.480 "zoned": false, 00:17:29.480 "supported_io_types": { 00:17:29.480 "read": true, 00:17:29.480 "write": true, 00:17:29.480 "unmap": true, 00:17:29.480 "flush": false, 00:17:29.480 "reset": true, 00:17:29.480 "nvme_admin": false, 00:17:29.480 "nvme_io": false, 00:17:29.480 "nvme_io_md": false, 00:17:29.480 "write_zeroes": true, 00:17:29.480 "zcopy": false, 00:17:29.480 "get_zone_info": false, 00:17:29.480 "zone_management": false, 00:17:29.480 "zone_append": false, 00:17:29.480 "compare": false, 00:17:29.480 "compare_and_write": false, 00:17:29.480 "abort": false, 00:17:29.480 "seek_hole": true, 00:17:29.480 "seek_data": true, 00:17:29.480 "copy": false, 00:17:29.480 "nvme_iov_md": false 00:17:29.480 }, 00:17:29.480 "driver_specific": { 00:17:29.480 "lvol": { 00:17:29.480 "lvol_store_uuid": "003777e7-3637-4db6-84c6-1678a2c538fb", 00:17:29.480 "base_bdev": "aio_bdev", 00:17:29.480 "thin_provision": false, 00:17:29.480 "num_allocated_clusters": 38, 00:17:29.480 "snapshot": false, 00:17:29.480 "clone": false, 00:17:29.480 "esnap_clone": false 00:17:29.480 } 00:17:29.480 } 00:17:29.480 } 00:17:29.480 ] 00:17:29.480 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:17:29.480 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:29.480 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:29.739 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:29.739 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:29.739 19:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:29.998 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:29.998 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:30.257 [2024-10-17 19:19:39.475257] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.516 19:19:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:30.516 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:30.785 request: 00:17:30.785 { 00:17:30.785 "uuid": "003777e7-3637-4db6-84c6-1678a2c538fb", 00:17:30.785 "method": "bdev_lvol_get_lvstores", 00:17:30.785 "req_id": 1 00:17:30.785 } 00:17:30.785 Got JSON-RPC error response 00:17:30.785 response: 00:17:30.785 { 00:17:30.785 "code": -19, 00:17:30.785 "message": "No such device" 00:17:30.785 } 00:17:30.785 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:17:30.785 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.785 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.785 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.785 19:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.054 aio_bdev 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.054 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.313 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e7679b3-ead8-42b5-a6c9-162eb495c20c -t 2000 00:17:31.571 [ 00:17:31.571 { 
00:17:31.571 "name": "5e7679b3-ead8-42b5-a6c9-162eb495c20c", 00:17:31.571 "aliases": [ 00:17:31.571 "lvs/lvol" 00:17:31.571 ], 00:17:31.571 "product_name": "Logical Volume", 00:17:31.571 "block_size": 4096, 00:17:31.571 "num_blocks": 38912, 00:17:31.571 "uuid": "5e7679b3-ead8-42b5-a6c9-162eb495c20c", 00:17:31.571 "assigned_rate_limits": { 00:17:31.571 "rw_ios_per_sec": 0, 00:17:31.571 "rw_mbytes_per_sec": 0, 00:17:31.571 "r_mbytes_per_sec": 0, 00:17:31.571 "w_mbytes_per_sec": 0 00:17:31.571 }, 00:17:31.571 "claimed": false, 00:17:31.571 "zoned": false, 00:17:31.571 "supported_io_types": { 00:17:31.571 "read": true, 00:17:31.571 "write": true, 00:17:31.571 "unmap": true, 00:17:31.571 "flush": false, 00:17:31.571 "reset": true, 00:17:31.571 "nvme_admin": false, 00:17:31.571 "nvme_io": false, 00:17:31.571 "nvme_io_md": false, 00:17:31.571 "write_zeroes": true, 00:17:31.571 "zcopy": false, 00:17:31.571 "get_zone_info": false, 00:17:31.571 "zone_management": false, 00:17:31.571 "zone_append": false, 00:17:31.571 "compare": false, 00:17:31.571 "compare_and_write": false, 00:17:31.571 "abort": false, 00:17:31.571 "seek_hole": true, 00:17:31.571 "seek_data": true, 00:17:31.571 "copy": false, 00:17:31.571 "nvme_iov_md": false 00:17:31.571 }, 00:17:31.571 "driver_specific": { 00:17:31.571 "lvol": { 00:17:31.571 "lvol_store_uuid": "003777e7-3637-4db6-84c6-1678a2c538fb", 00:17:31.571 "base_bdev": "aio_bdev", 00:17:31.571 "thin_provision": false, 00:17:31.571 "num_allocated_clusters": 38, 00:17:31.571 "snapshot": false, 00:17:31.571 "clone": false, 00:17:31.571 "esnap_clone": false 00:17:31.571 } 00:17:31.571 } 00:17:31.571 } 00:17:31.571 ] 00:17:31.571 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:17:31.571 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:31.571 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:31.830 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:31.830 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:31.830 19:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:32.088 19:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:32.088 19:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5e7679b3-ead8-42b5-a6c9-162eb495c20c 00:17:32.347 19:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 003777e7-3637-4db6-84c6-1678a2c538fb 00:17:32.606 19:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:32.867 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:33.465 ************************************ 00:17:33.465 END TEST lvs_grow_dirty 00:17:33.465 ************************************ 00:17:33.465 00:17:33.465 real 0m21.879s 00:17:33.465 user 0m44.612s 00:17:33.465 sys 0m8.051s 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.465 nvmf_trace.0 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:33.465 19:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.032 rmmod nvme_tcp 00:17:34.032 rmmod nvme_fabrics 00:17:34.032 rmmod nvme_keyring 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 63901 ']' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 63901 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63901 ']' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63901 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:17:34.032 19:19:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63901 00:17:34.032 killing process with pid 63901 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63901' 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63901 00:17:34.032 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63901 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.291 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:17:34.551 ************************************ 00:17:34.551 END TEST nvmf_lvs_grow 00:17:34.551 ************************************ 00:17:34.551 00:17:34.551 real 0m43.126s 00:17:34.551 user 1m9.313s 00:17:34.551 sys 0m11.844s 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:34.551 ************************************ 00:17:34.551 START TEST nvmf_bdev_io_wait 00:17:34.551 ************************************ 00:17:34.551 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:34.812 * Looking for test storage... 
00:17:34.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.812 --rc genhtml_branch_coverage=1 00:17:34.812 --rc genhtml_function_coverage=1 00:17:34.812 --rc genhtml_legend=1 00:17:34.812 --rc geninfo_all_blocks=1 00:17:34.812 --rc geninfo_unexecuted_blocks=1 00:17:34.812 00:17:34.812 ' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.812 --rc genhtml_branch_coverage=1 00:17:34.812 --rc genhtml_function_coverage=1 00:17:34.812 --rc genhtml_legend=1 00:17:34.812 --rc geninfo_all_blocks=1 00:17:34.812 --rc geninfo_unexecuted_blocks=1 00:17:34.812 00:17:34.812 ' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.812 --rc genhtml_branch_coverage=1 00:17:34.812 --rc genhtml_function_coverage=1 00:17:34.812 --rc genhtml_legend=1 00:17:34.812 --rc geninfo_all_blocks=1 00:17:34.812 --rc geninfo_unexecuted_blocks=1 00:17:34.812 00:17:34.812 ' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:34.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.812 --rc genhtml_branch_coverage=1 00:17:34.812 --rc genhtml_function_coverage=1 00:17:34.812 --rc genhtml_legend=1 00:17:34.812 --rc geninfo_all_blocks=1 00:17:34.812 --rc geninfo_unexecuted_blocks=1 00:17:34.812 00:17:34.812 ' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.812 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
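bdev_io_wait.sh defines its malloc bdev constants (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512) and then calls nvmftestinit. With NET_TYPE=virt this runs nvmf_veth_init, which builds the virtual topology visible in the records that follow: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.3 and 10.0.0.4), the host-side initiator ends (10.0.0.1 and 10.0.0.2), and the nvmf_br bridge joining the host-side peers. A condensed sketch of that setup, limited to the first initiator/target pair and using the same ip(8) commands that appear below (the stale-device cleanup and the iptables ACCEPT rules are omitted):

  # target-side interfaces live in their own namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator side, one for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addresses: initiator on the host, target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" and "Cannot open network namespace" messages below are the expected output of the teardown commands that run first, since the previous test already removed these devices.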
00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.813 
19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.813 19:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.813 Cannot find device "nvmf_init_br" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.813 Cannot find device "nvmf_init_br2" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.813 Cannot find device "nvmf_tgt_br" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.813 Cannot find device "nvmf_tgt_br2" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.813 Cannot find device "nvmf_init_br" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.813 Cannot find device "nvmf_init_br2" 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:17:34.813 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:35.071 Cannot find device "nvmf_tgt_br" 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:35.071 Cannot find device "nvmf_tgt_br2" 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:35.071 Cannot find device "nvmf_br" 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:35.071 Cannot find device "nvmf_init_if" 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:35.071 Cannot find device "nvmf_init_if2" 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:17:35.071 
19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:35.071 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:35.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:17:35.330 00:17:35.330 --- 10.0.0.3 ping statistics --- 00:17:35.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.330 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:35.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:35.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:35.330 00:17:35.330 --- 10.0.0.4 ping statistics --- 00:17:35.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.330 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:35.330 00:17:35.330 --- 10.0.0.1 ping statistics --- 00:17:35.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.330 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:35.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:35.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:35.330 00:17:35.330 --- 10.0.0.2 ping statistics --- 00:17:35.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.330 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64279 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64279 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64279 ']' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:35.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:35.330 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 [2024-10-17 19:19:44.499568] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
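At this point nvmfappstart has launched the target inside the namespace and is blocking in waitforlisten until the RPC socket answers. Stripped of the harness helpers, the equivalent standalone sequence is roughly the following (binary path and flags as in the trace above; the polling loop is a simplified stand-in for waitforlisten, not its actual implementation):

    # -m 0xF pins reactors to cores 0-3, -e 0xFFFF enables all trace groups,
    # and --wait-for-rpc defers subsystem init until an explicit framework_start_init RPC.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Poll the default RPC socket until the target is ready to accept commands.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

Once the socket responds, the bdev_set_options / framework_start_init / nvmf_create_transport calls below finish bringing the target up, and the Malloc0 bdev is exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420.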
00:17:35.330 [2024-10-17 19:19:44.499720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.590 [2024-10-17 19:19:44.644307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.590 [2024-10-17 19:19:44.727366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.590 [2024-10-17 19:19:44.727645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.590 [2024-10-17 19:19:44.727746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.590 [2024-10-17 19:19:44.727838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.590 [2024-10-17 19:19:44.727926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.590 [2024-10-17 19:19:44.729529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.590 [2024-10-17 19:19:44.729683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.590 [2024-10-17 19:19:44.729819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.590 [2024-10-17 19:19:44.729893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.590 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 [2024-10-17 19:19:44.902250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 [2024-10-17 19:19:44.916546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 Malloc0 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.850 [2024-10-17 19:19:44.987504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64312 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64314 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:35.850 19:19:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:35.850 { 00:17:35.850 "params": { 00:17:35.850 "name": "Nvme$subsystem", 00:17:35.850 "trtype": "$TEST_TRANSPORT", 00:17:35.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.850 "adrfam": "ipv4", 00:17:35.850 "trsvcid": "$NVMF_PORT", 00:17:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.850 "hdgst": ${hdgst:-false}, 00:17:35.850 "ddgst": ${ddgst:-false} 00:17:35.850 }, 00:17:35.850 "method": "bdev_nvme_attach_controller" 00:17:35.850 } 00:17:35.850 EOF 00:17:35.850 )") 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64316 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:35.850 { 00:17:35.850 "params": { 00:17:35.850 "name": "Nvme$subsystem", 00:17:35.850 "trtype": "$TEST_TRANSPORT", 00:17:35.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.850 "adrfam": "ipv4", 00:17:35.850 "trsvcid": "$NVMF_PORT", 00:17:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.850 "hdgst": ${hdgst:-false}, 00:17:35.850 "ddgst": ${ddgst:-false} 00:17:35.850 }, 00:17:35.850 "method": "bdev_nvme_attach_controller" 00:17:35.850 } 00:17:35.850 EOF 00:17:35.850 )") 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64318 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:35.850 19:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 
00:17:35.850 { 00:17:35.850 "params": { 00:17:35.850 "name": "Nvme$subsystem", 00:17:35.850 "trtype": "$TEST_TRANSPORT", 00:17:35.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.850 "adrfam": "ipv4", 00:17:35.850 "trsvcid": "$NVMF_PORT", 00:17:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.850 "hdgst": ${hdgst:-false}, 00:17:35.850 "ddgst": ${ddgst:-false} 00:17:35.850 }, 00:17:35.850 "method": "bdev_nvme_attach_controller" 00:17:35.850 } 00:17:35.850 EOF 00:17:35.850 )") 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:35.850 "params": { 00:17:35.850 "name": "Nvme1", 00:17:35.850 "trtype": "tcp", 00:17:35.850 "traddr": "10.0.0.3", 00:17:35.850 "adrfam": "ipv4", 00:17:35.850 "trsvcid": "4420", 00:17:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.850 "hdgst": false, 00:17:35.850 "ddgst": false 00:17:35.850 }, 00:17:35.850 "method": "bdev_nvme_attach_controller" 00:17:35.850 }' 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:35.850 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:35.850 { 00:17:35.850 "params": { 00:17:35.850 "name": "Nvme$subsystem", 00:17:35.850 "trtype": "$TEST_TRANSPORT", 00:17:35.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.850 "adrfam": "ipv4", 00:17:35.850 "trsvcid": "$NVMF_PORT", 00:17:35.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.850 "hdgst": ${hdgst:-false}, 00:17:35.850 "ddgst": ${ddgst:-false} 00:17:35.851 }, 00:17:35.851 "method": "bdev_nvme_attach_controller" 00:17:35.851 } 00:17:35.851 EOF 00:17:35.851 )") 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:35.851 "params": { 00:17:35.851 "name": "Nvme1", 00:17:35.851 "trtype": "tcp", 00:17:35.851 "traddr": "10.0.0.3", 00:17:35.851 "adrfam": "ipv4", 00:17:35.851 "trsvcid": "4420", 00:17:35.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.851 "hdgst": false, 00:17:35.851 "ddgst": false 00:17:35.851 }, 00:17:35.851 "method": "bdev_nvme_attach_controller" 00:17:35.851 }' 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # 
cat 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:35.851 "params": { 00:17:35.851 "name": "Nvme1", 00:17:35.851 "trtype": "tcp", 00:17:35.851 "traddr": "10.0.0.3", 00:17:35.851 "adrfam": "ipv4", 00:17:35.851 "trsvcid": "4420", 00:17:35.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.851 "hdgst": false, 00:17:35.851 "ddgst": false 00:17:35.851 }, 00:17:35.851 "method": "bdev_nvme_attach_controller" 00:17:35.851 }' 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:35.851 "params": { 00:17:35.851 "name": "Nvme1", 00:17:35.851 "trtype": "tcp", 00:17:35.851 "traddr": "10.0.0.3", 00:17:35.851 "adrfam": "ipv4", 00:17:35.851 "trsvcid": "4420", 00:17:35.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.851 "hdgst": false, 00:17:35.851 "ddgst": false 00:17:35.851 }, 00:17:35.851 "method": "bdev_nvme_attach_controller" 00:17:35.851 }' 00:17:35.851 [2024-10-17 19:19:45.052702] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:17:35.851 [2024-10-17 19:19:45.052809] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:35.851 [2024-10-17 19:19:45.065036] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:17:35.851 [2024-10-17 19:19:45.065119] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:35.851 19:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64312 00:17:35.851 [2024-10-17 19:19:45.082701] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:17:35.851 [2024-10-17 19:19:45.083114] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:35.851 [2024-10-17 19:19:45.083354] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
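The JSON fragments printed above are how each bdevperf instance learns about the target: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry and the harness hands it to bdevperf through process substitution, which is why the command lines show --json /dev/fd/63. A minimal reproduction of the same pattern for the write instance, with the connection parameters exactly as printed in this log (the surrounding "subsystems"/"bdev" wrapper is paraphrased from the helper and abridged here, not quoted from the trace):

    # Bdev-subsystem config telling bdevperf to attach one NVMe-oF/TCP controller.
    config='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }'

    # Core mask 0x10, instance id 1, queue depth 128, 4 KiB writes for 1 second,
    # 256 MiB of memory -- the same flags as the WRITE_PID instance above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(printf '%s\n' "$config")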
00:17:35.851 [2024-10-17 19:19:45.083417] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:36.109 [2024-10-17 19:19:45.267104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.109 [2024-10-17 19:19:45.333341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.109 [2024-10-17 19:19:45.341193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.109 [2024-10-17 19:19:45.347532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.368 [2024-10-17 19:19:45.400018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.368 [2024-10-17 19:19:45.409299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.368 [2024-10-17 19:19:45.413818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.368 [2024-10-17 19:19:45.467893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.368 [2024-10-17 19:19:45.481901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.368 [2024-10-17 19:19:45.489564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.368 Running I/O for 1 seconds... 00:17:36.368 Running I/O for 1 seconds... 00:17:36.368 [2024-10-17 19:19:45.547224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:17:36.368 [2024-10-17 19:19:45.560903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.627 Running I/O for 1 seconds... 00:17:36.627 Running I/O for 1 seconds... 
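Four such instances run concurrently against the same namespace, one workload each on its own core (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80); the script stores their pids (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID above) and waits on each in turn. The launch pattern, sketched with the harness helper gen_nvmf_target_json standing in for the config generation shown earlier:

    # Workloads and the core masks used for them in this run.
    declare -A mask=([write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80)
    pids=()
    i=1
    for wl in write read flush unmap; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m "${mask[$wl]}" -i "$i" -q 128 -o 4096 -w "$wl" -t 1 -s 256 \
            --json <(gen_nvmf_target_json) &   # gen_nvmf_target_json: harness helper traced above
        pids+=($!)
        i=$((i + 1))
    done
    wait "${pids[@]}"

The per-workload IOPS/latency tables that follow are bdevperf's own summaries; the flush numbers are far higher than the others largely because a flush against a RAM-backed malloc bdev is effectively a no-op.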
00:17:37.562 4706.00 IOPS, 18.38 MiB/s 00:17:37.562 Latency(us) 00:17:37.562 [2024-10-17T19:19:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.562 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:37.562 Nvme1n1 : 1.03 4723.62 18.45 0.00 0.00 26790.44 5242.88 49807.36 00:17:37.562 [2024-10-17T19:19:46.820Z] =================================================================================================================== 00:17:37.562 [2024-10-17T19:19:46.820Z] Total : 4723.62 18.45 0.00 0.00 26790.44 5242.88 49807.36 00:17:37.562 5763.00 IOPS, 22.51 MiB/s 00:17:37.562 Latency(us) 00:17:37.562 [2024-10-17T19:19:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.562 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:37.562 Nvme1n1 : 1.02 5804.58 22.67 0.00 0.00 21882.78 11796.48 34317.03 00:17:37.562 [2024-10-17T19:19:46.820Z] =================================================================================================================== 00:17:37.562 [2024-10-17T19:19:46.820Z] Total : 5804.58 22.67 0.00 0.00 21882.78 11796.48 34317.03 00:17:37.562 168968.00 IOPS, 660.03 MiB/s 00:17:37.562 Latency(us) 00:17:37.562 [2024-10-17T19:19:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.562 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:37.562 Nvme1n1 : 1.00 168634.01 658.73 0.00 0.00 755.11 379.81 1980.97 00:17:37.562 [2024-10-17T19:19:46.820Z] =================================================================================================================== 00:17:37.562 [2024-10-17T19:19:46.820Z] Total : 168634.01 658.73 0.00 0.00 755.11 379.81 1980.97 00:17:37.562 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64314 00:17:37.562 5328.00 IOPS, 20.81 MiB/s 00:17:37.562 Latency(us) 00:17:37.562 [2024-10-17T19:19:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.562 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:37.562 Nvme1n1 : 1.01 5474.81 21.39 0.00 0.00 23300.28 5510.98 58624.93 00:17:37.562 [2024-10-17T19:19:46.820Z] =================================================================================================================== 00:17:37.562 [2024-10-17T19:19:46.820Z] Total : 5474.81 21.39 0.00 0.00 23300.28 5510.98 58624.93 00:17:37.562 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64316 00:17:37.562 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64318 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.822 rmmod nvme_tcp 00:17:37.822 rmmod nvme_fabrics 00:17:37.822 rmmod nvme_keyring 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64279 ']' 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64279 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64279 ']' 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64279 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.822 19:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64279 00:17:37.822 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.822 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.822 killing process with pid 64279 00:17:37.822 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64279' 00:17:37.822 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64279 00:17:37.822 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64279 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:38.080 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:38.081 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:38.081 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:17:38.340 00:17:38.340 real 0m3.772s 00:17:38.340 user 0m14.678s 00:17:38.340 sys 0m2.166s 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:38.340 ************************************ 00:17:38.340 END TEST nvmf_bdev_io_wait 00:17:38.340 ************************************ 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:38.340 ************************************ 00:17:38.340 START TEST nvmf_queue_depth 00:17:38.340 ************************************ 00:17:38.340 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:38.600 * Looking for test storage... 
00:17:38.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:38.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.600 --rc genhtml_branch_coverage=1 00:17:38.600 --rc genhtml_function_coverage=1 00:17:38.600 --rc genhtml_legend=1 00:17:38.600 --rc geninfo_all_blocks=1 00:17:38.600 --rc geninfo_unexecuted_blocks=1 00:17:38.600 00:17:38.600 ' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:38.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.600 --rc genhtml_branch_coverage=1 00:17:38.600 --rc genhtml_function_coverage=1 00:17:38.600 --rc genhtml_legend=1 00:17:38.600 --rc geninfo_all_blocks=1 00:17:38.600 --rc geninfo_unexecuted_blocks=1 00:17:38.600 00:17:38.600 ' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:38.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.600 --rc genhtml_branch_coverage=1 00:17:38.600 --rc genhtml_function_coverage=1 00:17:38.600 --rc genhtml_legend=1 00:17:38.600 --rc geninfo_all_blocks=1 00:17:38.600 --rc geninfo_unexecuted_blocks=1 00:17:38.600 00:17:38.600 ' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:38.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.600 --rc genhtml_branch_coverage=1 00:17:38.600 --rc genhtml_function_coverage=1 00:17:38.600 --rc genhtml_legend=1 00:17:38.600 --rc geninfo_all_blocks=1 00:17:38.600 --rc geninfo_unexecuted_blocks=1 00:17:38.600 00:17:38.600 ' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:38.600 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:38.601 
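queue_depth.sh starts from the same building blocks as the previous test: a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE just above) exported over NVMe/TCP from the namespaced target. The core of that RPC sequence, as traced for nvmf_bdev_io_wait earlier, reduces to the following standalone sketch (scripts/rpc.py driving the default /var/tmp/spdk.sock; rpc_cmd in the trace is a thin harness wrapper around the same calls):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    # Transport options exactly as traced above (-t tcp -o -u 8192).
    "$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB backing bdev with 512-byte blocks, then export it on 10.0.0.3:4420.
    "$rpc" -s "$sock" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420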
19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.601 19:19:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:38.601 Cannot find device "nvmf_init_br" 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.601 Cannot find device "nvmf_init_br2" 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:38.601 Cannot find device "nvmf_tgt_br" 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.601 Cannot find device "nvmf_tgt_br2" 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:17:38.601 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:38.859 Cannot find device "nvmf_init_br" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.859 Cannot find device "nvmf_init_br2" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.859 Cannot find device "nvmf_tgt_br" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.859 Cannot find device "nvmf_tgt_br2" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.859 Cannot find device "nvmf_br" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.859 Cannot find device "nvmf_init_if" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.859 Cannot find device "nvmf_init_if2" 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.859 19:19:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:17:38.859 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.860 19:19:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.860 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.118 
19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:39.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:17:39.118 00:17:39.118 --- 10.0.0.3 ping statistics --- 00:17:39.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.118 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:39.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:39.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:17:39.118 00:17:39.118 --- 10.0.0.4 ping statistics --- 00:17:39.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.118 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:39.118 00:17:39.118 --- 10.0.0.1 ping statistics --- 00:17:39.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.118 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:39.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:39.118 00:17:39.118 --- 10.0.0.2 ping statistics --- 00:17:39.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.118 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:39.118 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64578 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64578 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64578 ']' 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.119 19:19:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:39.119 [2024-10-17 19:19:48.306049] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
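For orientation, the nvmf_veth_init sequence traced above boils down to the sketch below. Interface names, addresses, and the port-4420 rules are copied from the trace; the "up" steps are omitted, and this is a condensed summary rather than a substitute for nvmf/common.sh:

    # Hedged summary of the veth/namespace topology built above.
    ip netns add nvmf_tgt_ns_spdk                                  # target-side network namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses on the host
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses in the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                # bridge joining the host-side peers
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

With this in place, 10.0.0.1 and 10.0.0.2 act as initiator addresses on the host side of the bridge while 10.0.0.3 and 10.0.0.4 live inside nvmf_tgt_ns_spdk, which is why nvmf_tgt above is launched under ip netns exec nvmf_tgt_ns_spdk.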
00:17:39.119 [2024-10-17 19:19:48.306203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.377 [2024-10-17 19:19:48.455167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.377 [2024-10-17 19:19:48.548357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.377 [2024-10-17 19:19:48.548430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.377 [2024-10-17 19:19:48.548447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.377 [2024-10-17 19:19:48.548460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.377 [2024-10-17 19:19:48.548472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.377 [2024-10-17 19:19:48.549006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.377 [2024-10-17 19:19:48.628774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.312 [2024-10-17 19:19:49.412579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.312 Malloc0 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.312 [2024-10-17 19:19:49.467190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64610 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64610 /var/tmp/bdevperf.sock 00:17:40.312 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64610 ']' 00:17:40.313 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.313 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.313 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.313 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.313 19:19:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:40.313 [2024-10-17 19:19:49.534963] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
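The measurement itself is driven by bdevperf started in -z (wait-for-RPC) mode. Condensed from the queue_depth.sh steps in the trace, with the /home/vagrant/spdk_repo/spdk prefix dropped from the paths, the sequence is roughly:

    # Start bdevperf idle, waiting on its own RPC socket.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # Attach the namespace exported on 10.0.0.3:4420 as bdev NVMe0n1 via the bdevperf RPC socket.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Run the 10-second verify workload at queue depth 1024 with 4096-byte I/O and print the JSON summary.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The IOPS ramp and the JSON result block that follow in the trace are the output of that perform_tests call.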
00:17:40.313 [2024-10-17 19:19:49.535081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64610 ] 00:17:40.572 [2024-10-17 19:19:49.673841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.572 [2024-10-17 19:19:49.734536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.572 [2024-10-17 19:19:49.789040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:41.508 NVMe0n1 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.508 19:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:41.508 Running I/O for 10 seconds... 00:17:43.480 6218.00 IOPS, 24.29 MiB/s [2024-10-17T19:19:54.115Z] 6933.00 IOPS, 27.08 MiB/s [2024-10-17T19:19:55.046Z] 7177.33 IOPS, 28.04 MiB/s [2024-10-17T19:19:55.983Z] 7319.50 IOPS, 28.59 MiB/s [2024-10-17T19:19:56.917Z] 7395.80 IOPS, 28.89 MiB/s [2024-10-17T19:19:57.857Z] 7505.67 IOPS, 29.32 MiB/s [2024-10-17T19:19:58.788Z] 7609.29 IOPS, 29.72 MiB/s [2024-10-17T19:20:00.160Z] 7632.25 IOPS, 29.81 MiB/s [2024-10-17T19:20:01.095Z] 7675.67 IOPS, 29.98 MiB/s [2024-10-17T19:20:01.095Z] 7698.70 IOPS, 30.07 MiB/s 00:17:51.837 Latency(us) 00:17:51.837 [2024-10-17T19:20:01.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.837 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:51.837 Verification LBA range: start 0x0 length 0x4000 00:17:51.837 NVMe0n1 : 10.08 7737.56 30.22 0.00 0.00 131747.49 17992.61 97708.22 00:17:51.837 [2024-10-17T19:20:01.095Z] =================================================================================================================== 00:17:51.837 [2024-10-17T19:20:01.095Z] Total : 7737.56 30.22 0.00 0.00 131747.49 17992.61 97708.22 00:17:51.837 { 00:17:51.837 "results": [ 00:17:51.837 { 00:17:51.837 "job": "NVMe0n1", 00:17:51.837 "core_mask": "0x1", 00:17:51.837 "workload": "verify", 00:17:51.837 "status": "finished", 00:17:51.837 "verify_range": { 00:17:51.837 "start": 0, 00:17:51.837 "length": 16384 00:17:51.837 }, 00:17:51.837 "queue_depth": 1024, 00:17:51.837 "io_size": 4096, 00:17:51.837 "runtime": 10.080305, 00:17:51.837 "iops": 7737.563496342621, 00:17:51.837 "mibps": 30.224857407588363, 00:17:51.837 "io_failed": 0, 00:17:51.837 "io_timeout": 0, 00:17:51.837 "avg_latency_us": 131747.49132533072, 00:17:51.837 "min_latency_us": 17992.61090909091, 00:17:51.837 "max_latency_us": 97708.21818181819 00:17:51.837 
} 00:17:51.837 ], 00:17:51.837 "core_count": 1 00:17:51.837 } 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64610 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64610 ']' 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64610 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.837 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64610 00:17:51.837 killing process with pid 64610 00:17:51.837 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.837 00:17:51.837 Latency(us) 00:17:51.837 [2024-10-17T19:20:01.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.837 [2024-10-17T19:20:01.095Z] =================================================================================================================== 00:17:51.838 [2024-10-17T19:20:01.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.838 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.838 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.838 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64610' 00:17:51.838 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64610 00:17:51.838 19:20:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64610 00:17:51.838 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:51.838 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:51.838 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:51.838 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.096 rmmod nvme_tcp 00:17:52.096 rmmod nvme_fabrics 00:17:52.096 rmmod nvme_keyring 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64578 ']' 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64578 00:17:52.096 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64578 ']' 00:17:52.097 
19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64578 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64578 00:17:52.097 killing process with pid 64578 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64578' 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64578 00:17:52.097 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64578 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.356 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.615 19:20:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:17:52.615 00:17:52.615 real 0m14.164s 00:17:52.615 user 0m23.699s 00:17:52.615 sys 0m2.601s 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.615 ************************************ 00:17:52.615 END TEST nvmf_queue_depth 00:17:52.615 ************************************ 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:52.615 ************************************ 00:17:52.615 START TEST nvmf_target_multipath 00:17:52.615 ************************************ 00:17:52.615 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:52.874 * Looking for test storage... 
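Before the multipath test starts, nvmftestfini in the trace above unwinds the queue-depth topology. Condensed, and assuming _remove_spdk_ns (whose body is not expanded in the log) simply deletes the namespace, the teardown amounts to:

    # Condensed from the nvmf_tcp_fini / nvmf_veth_fini steps shown above; ordering is slightly compacted.
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the rules tagged SPDK_NVMF at setup time
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster                          # detach the host-side peers from the bridge
        ip link set "$peer" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                          # assumption: what _remove_spdk_ns does

The multipath test that follows then rebuilds the same topology from scratch, which is why the early "Cannot find device" and "Cannot open network namespace" messages below are expected and handled with "true".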
00:17:52.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:52.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.874 --rc genhtml_branch_coverage=1 00:17:52.874 --rc genhtml_function_coverage=1 00:17:52.874 --rc genhtml_legend=1 00:17:52.874 --rc geninfo_all_blocks=1 00:17:52.874 --rc geninfo_unexecuted_blocks=1 00:17:52.874 00:17:52.874 ' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:52.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.874 --rc genhtml_branch_coverage=1 00:17:52.874 --rc genhtml_function_coverage=1 00:17:52.874 --rc genhtml_legend=1 00:17:52.874 --rc geninfo_all_blocks=1 00:17:52.874 --rc geninfo_unexecuted_blocks=1 00:17:52.874 00:17:52.874 ' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:52.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.874 --rc genhtml_branch_coverage=1 00:17:52.874 --rc genhtml_function_coverage=1 00:17:52.874 --rc genhtml_legend=1 00:17:52.874 --rc geninfo_all_blocks=1 00:17:52.874 --rc geninfo_unexecuted_blocks=1 00:17:52.874 00:17:52.874 ' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:52.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.874 --rc genhtml_branch_coverage=1 00:17:52.874 --rc genhtml_function_coverage=1 00:17:52.874 --rc genhtml_legend=1 00:17:52.874 --rc geninfo_all_blocks=1 00:17:52.874 --rc geninfo_unexecuted_blocks=1 00:17:52.874 00:17:52.874 ' 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.874 19:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.874 
19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.874 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.875 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.875 19:20:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.875 Cannot find device "nvmf_init_br" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.875 Cannot find device "nvmf_init_br2" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.875 Cannot find device "nvmf_tgt_br" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.875 Cannot find device "nvmf_tgt_br2" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.875 Cannot find device "nvmf_init_br" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.875 Cannot find device "nvmf_init_br2" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.875 Cannot find device "nvmf_tgt_br" 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:17:52.875 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:53.134 Cannot find device "nvmf_tgt_br2" 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:53.134 Cannot find device "nvmf_br" 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:53.134 Cannot find device "nvmf_init_if" 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:53.134 Cannot find device "nvmf_init_if2" 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:53.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:53.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.134 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
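The ipts calls that appear shortly below are a thin wrapper over iptables: each rule is tagged with an SPDK_NVMF comment so that the iptr cleanup seen earlier can strip exactly these rules. The helper bodies are not printed in the log, so the following is a sketch inferred from their expansions in the trace:

    ipts() {
        # Append a comment identifying the rule as ours, e.g. 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if ...'.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # Reload the rule set minus anything carrying the SPDK_NVMF tag.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

Tagging the rules this way lets the teardown restore the host firewall without touching rules it did not add.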
00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:53.135 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:53.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:53.135 00:17:53.135 --- 10.0.0.3 ping statistics --- 00:17:53.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.135 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:53.394 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:53.394 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:17:53.394 00:17:53.394 --- 10.0.0.4 ping statistics --- 00:17:53.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.394 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:53.394 00:17:53.394 --- 10.0.0.1 ping statistics --- 00:17:53.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.394 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:53.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:53.394 00:17:53.394 --- 10.0.0.2 ping statistics --- 00:17:53.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.394 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=64989 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 64989 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 64989 ']' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
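Once this second nvmf_tgt (four reactors, -m 0xF) is up, multipath.sh provisions a single subsystem reachable over both target addresses and connects to it twice from the host. Condensed from the RPCs and connect calls that follow in the trace, with long paths shortened and flags copied verbatim (-r turns on ANA reporting, which the ana_state checks further down rely on):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
    # Connect once per path, using the hostnqn/hostid generated earlier in the trace.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G

The two connects produce the nvme0c0n1 and nvme0c1n1 controller paths that the check_ana_state helper then inspects via /sys/block/*/ana_state.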
00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.394 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.394 [2024-10-17 19:20:02.483856] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:17:53.394 [2024-10-17 19:20:02.483961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.394 [2024-10-17 19:20:02.621760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.654 [2024-10-17 19:20:02.689050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.654 [2024-10-17 19:20:02.689143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.654 [2024-10-17 19:20:02.689159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.654 [2024-10-17 19:20:02.689170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.654 [2024-10-17 19:20:02.689180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.654 [2024-10-17 19:20:02.690441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.654 [2024-10-17 19:20:02.690605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.654 [2024-10-17 19:20:02.690607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.654 [2024-10-17 19:20:02.690543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.654 [2024-10-17 19:20:02.752541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.654 19:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.221 [2024-10-17 19:20:03.169631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.221 19:20:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:54.479 Malloc0 00:17:54.479 19:20:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:17:54.737 19:20:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.995 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.254 [2024-10-17 19:20:04.281738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.254 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:17:55.512 [2024-10-17 19:20:04.534144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:17:55.512 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:17:55.512 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:17:55.771 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:17:55.771 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:17:55.771 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.771 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:55.771 19:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65071 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:57.677 19:20:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:17:57.677 [global] 00:17:57.677 thread=1 00:17:57.677 invalidate=1 00:17:57.677 rw=randrw 00:17:57.677 time_based=1 00:17:57.677 runtime=6 00:17:57.677 ioengine=libaio 00:17:57.677 direct=1 00:17:57.677 bs=4096 00:17:57.677 iodepth=128 00:17:57.677 norandommap=0 00:17:57.677 numjobs=1 00:17:57.677 00:17:57.677 verify_dump=1 00:17:57.677 verify_backlog=512 00:17:57.677 verify_state_save=0 00:17:57.677 do_verify=1 00:17:57.677 verify=crc32c-intel 00:17:57.677 [job0] 00:17:57.677 filename=/dev/nvme0n1 00:17:57.677 Could not set queue depth (nvme0n1) 00:17:57.936 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:57.936 fio-3.35 00:17:57.936 Starting 1 thread 00:17:58.875 19:20:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:59.134 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:59.392 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:59.651 19:20:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:59.909 19:20:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65071 00:18:04.155 00:18:04.155 job0: (groupid=0, jobs=1): err= 0: pid=65092: Thu Oct 17 19:20:13 2024 00:18:04.155 read: IOPS=9640, BW=37.7MiB/s (39.5MB/s)(226MiB/6003msec) 00:18:04.155 slat (usec): min=6, max=7719, avg=61.95, stdev=251.50 00:18:04.155 clat (usec): min=1672, max=17924, avg=9133.66, stdev=1717.04 00:18:04.155 lat (usec): min=1691, max=17936, avg=9195.61, stdev=1721.65 00:18:04.155 clat percentiles (usec): 00:18:04.155 | 1.00th=[ 4752], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8225], 00:18:04.155 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:18:04.155 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[11076], 95.00th=[12780], 00:18:04.155 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16712], 99.95th=[17171], 00:18:04.155 | 99.99th=[17957] 00:18:04.155 bw ( KiB/s): min=11704, max=25280, per=50.57%, avg=19499.27, stdev=4752.60, samples=11 00:18:04.155 iops : min= 2926, max= 6320, avg=4874.82, stdev=1188.15, samples=11 00:18:04.155 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(115MiB/5273msec); 0 zone resets 00:18:04.155 slat (usec): min=15, max=2551, avg=70.89, stdev=177.88 00:18:04.155 clat (usec): min=1674, max=17537, avg=7917.56, stdev=1541.61 00:18:04.155 lat (usec): min=1799, max=17565, avg=7988.45, stdev=1547.64 00:18:04.155 clat percentiles (usec): 00:18:04.155 | 1.00th=[ 3654], 5.00th=[ 4555], 10.00th=[ 6063], 20.00th=[ 7242], 00:18:04.155 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8225], 00:18:04.155 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:18:04.155 | 99.00th=[12518], 99.50th=[13435], 99.90th=[14877], 99.95th=[15401], 00:18:04.155 | 99.99th=[15926] 00:18:04.155 bw ( KiB/s): min=12288, max=24648, per=87.52%, avg=19539.09, stdev=4298.28, samples=11 00:18:04.155 iops : min= 3072, max= 6162, avg=4884.73, stdev=1074.58, samples=11 00:18:04.155 lat (msec) : 2=0.02%, 4=0.91%, 10=85.85%, 20=13.22% 00:18:04.155 cpu : usr=5.78%, sys=19.62%, ctx=5113, majf=0, minf=90 00:18:04.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:04.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.155 issued rwts: total=57869,29431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.155 00:18:04.155 Run status group 0 (all jobs): 00:18:04.155 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=226MiB (237MB), run=6003-6003msec 00:18:04.155 WRITE: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=115MiB (121MB), run=5273-5273msec 00:18:04.155 00:18:04.155 Disk stats (read/write): 00:18:04.155 nvme0n1: ios=56968/28919, merge=0/0, ticks=500225/215464, in_queue=715689, util=98.63% 00:18:04.155 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:04.414 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:04.673 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65175 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:04.674 19:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:18:04.674 [global] 00:18:04.674 thread=1 00:18:04.674 invalidate=1 00:18:04.674 rw=randrw 00:18:04.674 time_based=1 00:18:04.674 runtime=6 00:18:04.674 ioengine=libaio 00:18:04.674 direct=1 00:18:04.674 bs=4096 00:18:04.674 iodepth=128 00:18:04.674 norandommap=0 00:18:04.674 numjobs=1 00:18:04.674 00:18:04.674 verify_dump=1 00:18:04.674 verify_backlog=512 00:18:04.674 verify_state_save=0 00:18:04.674 do_verify=1 00:18:04.674 verify=crc32c-intel 00:18:04.674 [job0] 00:18:04.674 filename=/dev/nvme0n1 00:18:04.674 Could not set queue depth (nvme0n1) 00:18:04.933 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:04.933 fio-3.35 00:18:04.933 Starting 1 thread 00:18:05.921 19:20:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:06.180 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:18:06.438 
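Throughout this run the multipath.sh check_ana_state helper is expanded inline: for each path (nvme0c0n1, nvme0c1n1) it reads /sys/block/<path>/ana_state and compares it against the state the test just pushed with nvmf_subsystem_listener_set_ana_state, retrying with timeout=20 before failing. A minimal standalone sketch of that polling idiom follows; the function name and the one-second sleep are illustrative assumptions, not copied from the script.

    # Wait until the kernel reports the expected ANA state for one path,
    # e.g.: wait_ana_state nvme0c1n1 non-optimized
    wait_ana_state() {
        local path=$1 expected=$2 timeout=20
        local state_file=/sys/block/$path/ana_state
        while (( timeout-- > 0 )); do
            # The file appears once the controller is up; its content changes
            # when the target flips the listener's ANA state.
            if [[ -e $state_file && $(<"$state_file") == "$expected" ]]; then
                return 0
            fi
            sleep 1   # assumption: the real helper's retry delay may differ
        done
        echo "timed out waiting for $path to reach $expected" >&2
        return 1
    }

The fio workload keeps running while the listener states are flipped, so these checks verify that the host observes each transition while I/O continues on whichever path remains reachable.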
19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:06.438 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:06.697 19:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:06.955 19:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65175 00:18:11.145 00:18:11.145 job0: (groupid=0, jobs=1): err= 0: pid=65196: Thu Oct 17 19:20:20 2024 00:18:11.145 read: IOPS=9584, BW=37.4MiB/s (39.3MB/s)(225MiB/6008msec) 00:18:11.145 slat (usec): min=3, max=8016, avg=52.73, stdev=230.66 00:18:11.145 clat (usec): min=307, max=23708, avg=9184.68, stdev=2736.75 00:18:11.145 lat (usec): min=320, max=23720, avg=9237.41, stdev=2748.74 00:18:11.145 clat percentiles (usec): 00:18:11.145 | 1.00th=[ 1991], 5.00th=[ 4015], 10.00th=[ 5276], 20.00th=[ 7308], 00:18:11.145 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:18:11.145 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11863], 95.00th=[14091], 00:18:11.145 | 99.00th=[16057], 99.50th=[17171], 99.90th=[19792], 99.95th=[20317], 00:18:11.145 | 99.99th=[23462] 00:18:11.145 bw ( KiB/s): min= 3032, max=32032, per=51.88%, avg=19891.45, stdev=7261.49, samples=11 00:18:11.145 iops : min= 758, max= 8008, avg=4972.82, stdev=1815.37, samples=11 00:18:11.145 write: IOPS=5666, BW=22.1MiB/s (23.2MB/s)(118MiB/5330msec); 0 zone resets 00:18:11.145 slat (usec): min=13, max=1970, avg=61.32, stdev=162.28 00:18:11.145 clat (usec): min=1127, max=18884, avg=7740.02, stdev=2435.67 00:18:11.145 lat (usec): min=1166, max=18936, avg=7801.34, stdev=2448.98 00:18:11.145 clat percentiles (usec): 00:18:11.145 | 1.00th=[ 2180], 5.00th=[ 3326], 10.00th=[ 3982], 20.00th=[ 5080], 00:18:11.145 | 30.00th=[ 6980], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:18:11.145 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10683], 00:18:11.145 | 99.00th=[13173], 99.50th=[13960], 99.90th=[17171], 99.95th=[17695], 00:18:11.145 | 99.99th=[18220] 00:18:11.145 bw ( KiB/s): min= 3272, max=32768, per=88.00%, avg=19946.18, stdev=7271.04, samples=11 00:18:11.145 iops : min= 818, max= 8192, avg=4986.55, stdev=1817.76, samples=11 00:18:11.145 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.05% 00:18:11.145 lat (msec) : 2=0.75%, 4=5.82%, 10=68.24%, 20=25.03%, 50=0.04% 00:18:11.145 cpu : usr=5.38%, sys=20.56%, ctx=5249, majf=0, minf=90 00:18:11.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:11.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:11.145 issued rwts: total=57584,30201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.145 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:18:11.145 00:18:11.145 Run status group 0 (all jobs): 00:18:11.145 READ: bw=37.4MiB/s (39.3MB/s), 37.4MiB/s-37.4MiB/s (39.3MB/s-39.3MB/s), io=225MiB (236MB), run=6008-6008msec 00:18:11.145 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=118MiB (124MB), run=5330-5330msec 00:18:11.145 00:18:11.145 Disk stats (read/write): 00:18:11.145 nvme0n1: ios=56827/29735, merge=0/0, ticks=499735/215964, in_queue=715699, util=98.71% 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:18:11.145 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:11.409 rmmod nvme_tcp 00:18:11.409 rmmod nvme_fabrics 00:18:11.409 rmmod nvme_keyring 00:18:11.409 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.708 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:18:11.708 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:18:11.708 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # 
'[' -n 64989 ']' 00:18:11.708 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 64989 00:18:11.708 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 64989 ']' 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 64989 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64989 00:18:11.709 killing process with pid 64989 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64989' 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 64989 00:18:11.709 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 64989 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:11.977 19:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:11.977 
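nvmftestfini unwinds the virtual test network in roughly the reverse order nvmf_veth_init built it: SPDK_NVMF-tagged iptables rules are filtered out of a saved ruleset, the bridge legs are detached and brought down, the bridge and the host-side veth ends are deleted, and the target-side interfaces inside nvmf_tgt_ns_spdk are removed before the namespace itself. A condensed sketch of that cleanup (the explicit namespace delete and the error suppression are assumptions added for illustration):

    # Keep every iptables rule except the ones SPDK tagged with SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the veth legs from the bridge and take them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null
        ip link set "$dev" down 2>/dev/null
    done

    # Remove the bridge and the host-side veth endpoints.
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null

    # Target-side endpoints live inside the namespace; drop them, then the namespace.
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null

Deleting one end of a veth pair removes both ends, which is why the "Cannot find device" messages during the next test's setup below are harmless: its init phase re-runs the same teardown first and simply finds nothing left over.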
19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:18:11.977 00:18:11.977 real 0m19.397s 00:18:11.977 user 1m12.602s 00:18:11.977 sys 0m8.933s 00:18:11.977 ************************************ 00:18:11.977 END TEST nvmf_target_multipath 00:18:11.977 ************************************ 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.977 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:12.238 ************************************ 00:18:12.238 START TEST nvmf_zcopy 00:18:12.238 ************************************ 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:12.238 * Looking for test storage... 
00:18:12.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:12.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.238 --rc genhtml_branch_coverage=1 00:18:12.238 --rc genhtml_function_coverage=1 00:18:12.238 --rc genhtml_legend=1 00:18:12.238 --rc geninfo_all_blocks=1 00:18:12.238 --rc geninfo_unexecuted_blocks=1 00:18:12.238 00:18:12.238 ' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:12.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.238 --rc genhtml_branch_coverage=1 00:18:12.238 --rc genhtml_function_coverage=1 00:18:12.238 --rc genhtml_legend=1 00:18:12.238 --rc geninfo_all_blocks=1 00:18:12.238 --rc geninfo_unexecuted_blocks=1 00:18:12.238 00:18:12.238 ' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:12.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.238 --rc genhtml_branch_coverage=1 00:18:12.238 --rc genhtml_function_coverage=1 00:18:12.238 --rc genhtml_legend=1 00:18:12.238 --rc geninfo_all_blocks=1 00:18:12.238 --rc geninfo_unexecuted_blocks=1 00:18:12.238 00:18:12.238 ' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:12.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.238 --rc genhtml_branch_coverage=1 00:18:12.238 --rc genhtml_function_coverage=1 00:18:12.238 --rc genhtml_legend=1 00:18:12.238 --rc geninfo_all_blocks=1 00:18:12.238 --rc geninfo_unexecuted_blocks=1 00:18:12.238 00:18:12.238 ' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
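The block above is scripts/common.sh deciding which lcov option spelling to use: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and since lcov 1.15 sorts below 2 the older --rc lcov_branch_coverage/--rc lcov_function_coverage flags are selected. A reduced sketch of that field-wise comparison, simplified here to a plain less-than helper rather than the script's generic cmp_versions:

    # version_lt 1.15 2  -> returns 0 (1.15 is older than 2)
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            # Missing fields default to 0, so 1.15 compares like 1.15.0.
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }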
00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.238 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
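The "[: : integer expression expected" message from nvmf/common.sh line 33 comes from build_nvmf_app_args running a numeric [ ... -eq 1 ] test on a variable that is empty in this run; bash's test builtin cannot coerce an empty string to an integer, prints the warning, and the comparison simply evaluates false, so the run continues. A generic reproduction and a defensive variant (FLAG is a placeholder name, not the variable used in common.sh):

    unset FLAG
    if [ "$FLAG" -eq 1 ]; then   # prints "[: : integer expression expected" when FLAG is empty
        echo "flag set"
    fi

    # Defaulting the expansion keeps the test numeric and silent:
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi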
00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:12.239 Cannot find device "nvmf_init_br" 00:18:12.239 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:18:12.239 19:20:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:12.498 Cannot find device "nvmf_init_br2" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:12.498 Cannot find device "nvmf_tgt_br" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.498 Cannot find device "nvmf_tgt_br2" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:12.498 Cannot find device "nvmf_init_br" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:12.498 Cannot find device "nvmf_init_br2" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:12.498 Cannot find device "nvmf_tgt_br" 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:18:12.498 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:12.499 Cannot find device "nvmf_tgt_br2" 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:12.499 Cannot find device "nvmf_br" 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:12.499 Cannot find device "nvmf_init_if" 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:12.499 Cannot find device "nvmf_init_if2" 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:12.499 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:12.759 19:20:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:12.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:12.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:12.759 00:18:12.759 --- 10.0.0.3 ping statistics --- 00:18:12.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.759 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:12.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:12.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:18:12.759 00:18:12.759 --- 10.0.0.4 ping statistics --- 00:18:12.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.759 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:12.759 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:12.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:18:12.759 00:18:12.759 --- 10.0.0.1 ping statistics --- 00:18:12.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.759 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:12.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:12.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:12.760 00:18:12.760 --- 10.0.0.2 ping statistics --- 00:18:12.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.760 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65503 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65503 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65503 ']' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.760 19:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.760 [2024-10-17 19:20:21.957556] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
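The trace above is nvmf/common.sh clearing out any stale interfaces (the "Cannot find device ..." lines are the cleanup pass running on an already-clean host) and then building the test topology: veth pairs between the host and the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge, with iptables openings for TCP port 4420 and ping checks in both directions. A condensed sketch of the same bring-up, using only names and addresses that appear in the trace and keeping just the first initiator/target pair (the second pair, nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, is wired identically):

ip netns add nvmf_tgt_ns_spdk
# the initiator pair (nvmf_init_if/nvmf_init_br) stays on the host; nvmf_tgt_if moves into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# one bridge joins the host-side peers of both veth pairs
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port, allow bridge-internal forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

This bridge-plus-namespace layout is what lets the initiator at 10.0.0.1 on the host reach the target at 10.0.0.3 inside the namespace over ordinary TCP, which the four successful pings above confirm.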
00:18:12.760 [2024-10-17 19:20:21.957659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.018 [2024-10-17 19:20:22.088013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.018 [2024-10-17 19:20:22.167729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.018 [2024-10-17 19:20:22.167809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.018 [2024-10-17 19:20:22.167821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.018 [2024-10-17 19:20:22.167830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.018 [2024-10-17 19:20:22.167839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.018 [2024-10-17 19:20:22.168324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.018 [2024-10-17 19:20:22.253741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 [2024-10-17 19:20:23.063959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.956 [2024-10-17 19:20:23.080192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 malloc0 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:13.956 { 00:18:13.956 "params": { 00:18:13.956 "name": "Nvme$subsystem", 00:18:13.956 "trtype": "$TEST_TRANSPORT", 00:18:13.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.956 "adrfam": "ipv4", 00:18:13.956 "trsvcid": "$NVMF_PORT", 00:18:13.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.956 "hdgst": ${hdgst:-false}, 00:18:13.956 "ddgst": ${ddgst:-false} 00:18:13.956 }, 00:18:13.956 "method": "bdev_nvme_attach_controller" 00:18:13.956 } 00:18:13.956 EOF 00:18:13.956 )") 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
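By this point the target has been started inside the namespace and configured entirely over JSON-RPC: a TCP transport with zero-copy enabled (zcopy.sh@22), subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.3:4420, and a 32 MiB / 4096-byte-block malloc bdev attached as namespace 1. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the equivalent manual sequence from an SPDK checkout is roughly:

# start the target in the test namespace, then wait for the RPC socket
# (the harness does this via nvmfappstart/waitforlisten)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Every flag here is taken from the rpc_cmd lines in the trace; --zcopy on nvmf_create_transport is the option this whole test exists to exercise.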
00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:18:13.956 19:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:13.956 "params": { 00:18:13.956 "name": "Nvme1", 00:18:13.956 "trtype": "tcp", 00:18:13.956 "traddr": "10.0.0.3", 00:18:13.956 "adrfam": "ipv4", 00:18:13.956 "trsvcid": "4420", 00:18:13.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.956 "hdgst": false, 00:18:13.956 "ddgst": false 00:18:13.956 }, 00:18:13.956 "method": "bdev_nvme_attach_controller" 00:18:13.956 }' 00:18:13.956 [2024-10-17 19:20:23.171347] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:18:13.956 [2024-10-17 19:20:23.171433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65536 ] 00:18:14.215 [2024-10-17 19:20:23.311227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.215 [2024-10-17 19:20:23.384300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.215 [2024-10-17 19:20:23.450733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.474 Running I/O for 10 seconds... 00:18:16.345 5214.00 IOPS, 40.73 MiB/s [2024-10-17T19:20:26.990Z] 5198.00 IOPS, 40.61 MiB/s [2024-10-17T19:20:27.926Z] 5244.33 IOPS, 40.97 MiB/s [2024-10-17T19:20:28.860Z] 5285.25 IOPS, 41.29 MiB/s [2024-10-17T19:20:29.794Z] 5306.60 IOPS, 41.46 MiB/s [2024-10-17T19:20:30.728Z] 5320.17 IOPS, 41.56 MiB/s [2024-10-17T19:20:31.663Z] 5330.71 IOPS, 41.65 MiB/s [2024-10-17T19:20:32.599Z] 5331.25 IOPS, 41.65 MiB/s [2024-10-17T19:20:33.976Z] 5334.00 IOPS, 41.67 MiB/s [2024-10-17T19:20:33.976Z] 5333.70 IOPS, 41.67 MiB/s 00:18:24.718 Latency(us) 00:18:24.718 [2024-10-17T19:20:33.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.718 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:24.718 Verification LBA range: start 0x0 length 0x1000 00:18:24.718 Nvme1n1 : 10.02 5336.53 41.69 0.00 0.00 23902.60 2368.23 36461.85 00:18:24.718 [2024-10-17T19:20:33.976Z] =================================================================================================================== 00:18:24.718 [2024-10-17T19:20:33.976Z] Total : 5336.53 41.69 0.00 0.00 23902.60 2368.23 36461.85 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65653 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:24.718 { 00:18:24.718 "params": { 00:18:24.718 "name": "Nvme$subsystem", 00:18:24.718 "trtype": "$TEST_TRANSPORT", 00:18:24.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.718 "adrfam": "ipv4", 00:18:24.718 "trsvcid": "$NVMF_PORT", 00:18:24.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.718 "hdgst": ${hdgst:-false}, 00:18:24.718 "ddgst": ${ddgst:-false} 00:18:24.718 }, 00:18:24.718 "method": "bdev_nvme_attach_controller" 00:18:24.718 } 00:18:24.718 EOF 00:18:24.718 )") 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:18:24.718 [2024-10-17 19:20:33.804065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.804119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:18:24.718 19:20:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:24.718 "params": { 00:18:24.718 "name": "Nvme1", 00:18:24.718 "trtype": "tcp", 00:18:24.718 "traddr": "10.0.0.3", 00:18:24.718 "adrfam": "ipv4", 00:18:24.718 "trsvcid": "4420", 00:18:24.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.718 "hdgst": false, 00:18:24.718 "ddgst": false 00:18:24.718 }, 00:18:24.718 "method": "bdev_nvme_attach_controller" 00:18:24.718 }' 00:18:24.718 [2024-10-17 19:20:33.812018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.812052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.824015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.824047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.836016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.836046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.848021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.848052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.858882] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
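The initiator side of the test is plain bdevperf: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed just above (Nvme1 over TCP to 10.0.0.3:4420, digests off) and hands it to bdevperf on a /dev/fd pipe. The first pass was a 10-second verify workload; this second pass is a 5-second random read/write mix (-w randrw -M 50) at queue depth 128 with 8 KiB I/O. A standalone equivalent that writes the config to a file instead of a pipe (the file name is illustrative, and the subsystems/bdev wrapper is the structure gen_nvmf_target_json builds around the fragment, assumed here rather than shown verbatim in the trace):

cat > /tmp/bdevperf_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192

bdevperf attaches to the subsystem as host nqn.2016-06.io.spdk:host1 and drives I/O against the Nvme1n1 namespace, exactly as the 10-second results table above already showed for the verify pass.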
00:18:24.718 [2024-10-17 19:20:33.859756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65653 ] 00:18:24.718 [2024-10-17 19:20:33.860023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.860049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.872029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.872059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.884023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.884065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.896026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.896071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.908029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.908073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.920033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.920078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.932036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.932081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.944040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.944074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.718 [2024-10-17 19:20:33.956044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.718 [2024-10-17 19:20:33.956091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.719 [2024-10-17 19:20:33.968052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.719 [2024-10-17 19:20:33.968094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:33.980049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:33.980096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:33.992051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:33.992085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:33.996422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.978 [2024-10-17 19:20:34.000056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.000101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.012064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.012107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.024066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.024105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.036069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.036104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.044066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.044102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.052086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.052119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.060073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.060106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.060454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.978 [2024-10-17 19:20:34.068076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.068111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.076095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.076171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.084092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.084175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.092081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.092115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.104090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.104124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.112087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.112120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.124111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.124177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.124513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.978 [2024-10-17 19:20:34.132110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:24.978 [2024-10-17 19:20:34.132166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.140103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.140151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.152105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.152170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.164144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.164190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.176165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.176235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.184147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.184200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.192158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.192208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.200154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.200203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.208162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.208199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.216175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.216212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.224170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.224227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.978 [2024-10-17 19:20:34.232196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.978 [2024-10-17 19:20:34.232240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.240185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.240232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 Running I/O for 5 seconds... 
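The error pairs that fill the rest of this run are expected output, not a failure. While the 5-second randrw job is in flight, zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1; the target rejects each attempt ("Requested NSID 1 already in use"), but the rejection is reported from nvmf_rpc_ns_paused, i.e. after the RPC has already paused the subsystem, so every failed call still forces a pause/resume cycle underneath the in-flight zero-copy requests, which is the path this test wants to stress. In shape, the driving loop looks roughly like the following (a sketch of the test's intent, not a verbatim copy of zcopy.sh; perfpid is the backgrounded bdevperf PID captured at zcopy.sh@39 above):

# keep hitting the subsystem with failing add_ns calls for as long as bdevperf runs
while kill -0 "$perfpid" 2> /dev/null; do
    # fails with "Requested NSID 1 already in use", but the attempt still pauses
    # and resumes the subsystem while zcopy I/O is outstanding
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"

If the zero-copy request queuing were broken, the pause/resume churn would be expected to surface as failed or timed-out I/O in the bdevperf summary rather than the clean IOPS samples interleaved below.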
00:18:25.237 [2024-10-17 19:20:34.248210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.248245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.263319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.263363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.278982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.279051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.293445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.293487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.304352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.304393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.317249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.317298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.333646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.333705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.350344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.350386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.361438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.237 [2024-10-17 19:20:34.361480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.237 [2024-10-17 19:20:34.377184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.377225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.391355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.391397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.401835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.401890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.414584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.414638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.426398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.426442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.438417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 
[2024-10-17 19:20:34.438458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.455225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.455267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.472719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.472771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.238 [2024-10-17 19:20:34.483124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.238 [2024-10-17 19:20:34.483180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.496354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.496394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.511076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.511119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.521321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.521367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.537065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.537123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.553411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.553453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.564074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.564115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.576716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.576758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.588357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.588397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.604702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.604755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.621165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.621204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.631848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.631889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.644818] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.644861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.656654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.656694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.497 [2024-10-17 19:20:34.672818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.497 [2024-10-17 19:20:34.672869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.498 [2024-10-17 19:20:34.690167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.498 [2024-10-17 19:20:34.690206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.498 [2024-10-17 19:20:34.701351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.498 [2024-10-17 19:20:34.701391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.498 [2024-10-17 19:20:34.715944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.498 [2024-10-17 19:20:34.715984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.498 [2024-10-17 19:20:34.731695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.498 [2024-10-17 19:20:34.731735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.498 [2024-10-17 19:20:34.742349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.498 [2024-10-17 19:20:34.742402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.755096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.755153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.770897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.770949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.786252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.786294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.797594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.797637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.810194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.810234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.825767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.825809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.842247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.842288] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.756 [2024-10-17 19:20:34.853369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.756 [2024-10-17 19:20:34.853412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.868462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.868503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.882833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.882875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.892888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.892929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.908919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.908959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.920153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.920192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.936672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.936713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.953410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.953453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.963727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.963769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.976071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.976113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.987765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.987807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.757 [2024-10-17 19:20:34.999668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.757 [2024-10-17 19:20:34.999712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.016526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.016569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.033712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.033765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.050528] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.050568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.061283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.061324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.073838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.073880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.088743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.088785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.106809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.106848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.117914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.117959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.015 [2024-10-17 19:20:35.129866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.015 [2024-10-17 19:20:35.129917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.145109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.145177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.163945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.163990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.175091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.175143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.187143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.187198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.202044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.202107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.212652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.212707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.225226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.225268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.236884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.236926] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 10591.00 IOPS, 82.74 MiB/s [2024-10-17T19:20:35.274Z] [2024-10-17 19:20:35.252976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.253030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.016 [2024-10-17 19:20:35.268934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.016 [2024-10-17 19:20:35.268975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.279293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.279334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.292682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.292734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.307783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.307825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.324252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.324294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.343501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.343559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.359403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.359444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.370723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.370775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.386785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.386824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.398162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.398215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.414846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.414887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.424996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.425039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.441049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.441106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 
19:20:35.457252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.457290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.473260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.473303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.483587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.483638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.496388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.496430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.512663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.512705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.275 [2024-10-17 19:20:35.527206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.275 [2024-10-17 19:20:35.527247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.542804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.542856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.552658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.552696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.568365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.568406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.584344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.584392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.599450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.599491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.610436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.610478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.624382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.624424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.635872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.635913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.534 [2024-10-17 19:20:35.652438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.534 [2024-10-17 19:20:35.652480] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:26.534 [2024-10-17 19:20:35.663396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:26.534 [2024-10-17 19:20:35.663437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair from subsystem.c:2128 and nvmf_rpc.c:1517 repeats, with fresh timestamps roughly every 10-17 ms, for every nvmf_subsystem_add_ns attempt made while the namespace add/remove loop runs alongside the 5-second I/O job; only the periodic throughput readings below, which were interleaved with those repeats, break the pattern ...]
00:18:27.052 10627.50 IOPS, 83.03 MiB/s [2024-10-17T19:20:36.310Z]
00:18:28.088 10612.67 IOPS, 82.91 MiB/s [2024-10-17T19:20:37.346Z]
00:18:29.125 10636.75 IOPS, 83.10 MiB/s [2024-10-17T19:20:38.383Z]
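The error flood above comes from what appears to be a background add/remove loop spawned by test/nvmf/target/zcopy.sh (the `wait 65653` and the line-42 kill further down point to it): NSID 1 is re-added while it is still attached, the target rejects each such attempt, and the script keeps going. A minimal sketch of the RPC sequence that produces exactly this message pair, assuming a running target, the stock scripts/rpc.py client on its default socket, and a bdev named malloc0 (the bdev name is taken from the bdev_delay_create call later in this log; the test itself goes through its rpc_cmd wrapper):

    # hypothetical reproduction sketch -- not the test script itself
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: "Requested NSID 1 already in use"
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach NSID 1 so a later add can succeed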
00:18:30.175 10642.20 IOPS, 83.14 MiB/s [2024-10-17T19:20:39.433Z]
[2024-10-17 19:20:39.251348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:30.175 [2024-10-17 19:20:39.251524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:30.175
00:18:30.175 Latency(us)
00:18:30.175 [2024-10-17T19:20:39.433Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:30.175 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:30.175 Nvme1n1                      :       5.01   10642.32      83.14       0.00     0.00   12010.55    4885.41   22163.08
00:18:30.175 [2024-10-17T19:20:39.433Z] ===================================================================================================================
00:18:30.175 [2024-10-17T19:20:39.433Z] Total                        :              10642.32      83.14       0.00     0.00   12010.55    4885.41   22163.08
00:18:30.175 [2024-10-17 19:20:39.261320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:30.175 [2024-10-17 19:20:39.261479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating while the namespace loop winds down ...]
00:18:30.444 [2024-10-17 19:20:39.453380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:30.444 [2024-10-17 19:20:39.453520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:30.444 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65653) - No such process
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65653
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:30.444 delay0
common/autotest_common.sh@10 -- # set +x 00:18:30.444 delay0 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.444 19:20:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:18:30.444 [2024-10-17 19:20:39.655473] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:37.050 Initializing NVMe Controllers 00:18:37.050 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.050 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:37.050 Initialization complete. Launching workers. 00:18:37.050 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:18:37.050 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:18:37.050 success 252, unsuccessful 135, failed 0 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.050 rmmod nvme_tcp 00:18:37.050 rmmod nvme_fabrics 00:18:37.050 rmmod nvme_keyring 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65503 ']' 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65503 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65503 ']' 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65503 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.050 19:20:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65503 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65503' 00:18:37.050 killing process with pid 65503 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65503 00:18:37.050 19:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65503 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:37.050 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.309 19:20:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:18:37.309 00:18:37.309 real 0m25.150s 00:18:37.309 user 0m39.396s 00:18:37.309 sys 0m7.907s 00:18:37.309 ************************************ 00:18:37.309 END TEST nvmf_zcopy 00:18:37.309 ************************************ 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:37.309 ************************************ 00:18:37.309 START TEST nvmf_nmic 00:18:37.309 ************************************ 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.309 * Looking for test storage... 00:18:37.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:18:37.309 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.569 19:20:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.569 --rc genhtml_branch_coverage=1 00:18:37.569 --rc genhtml_function_coverage=1 00:18:37.569 --rc genhtml_legend=1 00:18:37.569 --rc geninfo_all_blocks=1 00:18:37.569 --rc geninfo_unexecuted_blocks=1 00:18:37.569 00:18:37.569 ' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.569 --rc genhtml_branch_coverage=1 00:18:37.569 --rc genhtml_function_coverage=1 00:18:37.569 --rc genhtml_legend=1 00:18:37.569 --rc geninfo_all_blocks=1 00:18:37.569 --rc geninfo_unexecuted_blocks=1 00:18:37.569 00:18:37.569 ' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.569 --rc genhtml_branch_coverage=1 00:18:37.569 --rc genhtml_function_coverage=1 00:18:37.569 --rc genhtml_legend=1 00:18:37.569 --rc geninfo_all_blocks=1 00:18:37.569 --rc geninfo_unexecuted_blocks=1 00:18:37.569 00:18:37.569 ' 00:18:37.569 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.569 --rc genhtml_branch_coverage=1 00:18:37.569 --rc genhtml_function_coverage=1 00:18:37.569 --rc genhtml_legend=1 00:18:37.569 --rc geninfo_all_blocks=1 00:18:37.569 --rc geninfo_unexecuted_blocks=1 00:18:37.569 00:18:37.569 ' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:37.570 19:20:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:37.570 19:20:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:37.570 Cannot 
find device "nvmf_init_br" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:37.570 Cannot find device "nvmf_init_br2" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:37.570 Cannot find device "nvmf_tgt_br" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.570 Cannot find device "nvmf_tgt_br2" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:37.570 Cannot find device "nvmf_init_br" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:37.570 Cannot find device "nvmf_init_br2" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:37.570 Cannot find device "nvmf_tgt_br" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:37.570 Cannot find device "nvmf_tgt_br2" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:37.570 Cannot find device "nvmf_br" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:37.570 Cannot find device "nvmf_init_if" 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:18:37.570 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:37.830 Cannot find device "nvmf_init_if2" 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:37.830 19:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:37.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:18:37.830 00:18:37.830 --- 10.0.0.3 ping statistics --- 00:18:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.830 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:37.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:37.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:18:37.830 00:18:37.830 --- 10.0.0.4 ping statistics --- 00:18:37.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.830 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:37.830 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:18:38.089 00:18:38.089 --- 10.0.0.1 ping statistics --- 00:18:38.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.089 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:38.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:38.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:18:38.089 00:18:38.089 --- 10.0.0.2 ping statistics --- 00:18:38.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.089 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=66026 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 66026 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66026 ']' 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.089 19:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:38.089 [2024-10-17 19:20:47.182542] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
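At this point nvmf_veth_init has finished wiring up the test network and nvmfappstart is launching the target inside it. Condensed into a by-hand sketch (device names, addresses, and the nvmf_tgt invocation are taken from the commands logged above; this is only a recap of the flow, not the literal helper script):

    # Target lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the *_br peers
    # ...bring every link up, enslave the *_br ends to nvmf_br, open TCP/4420 in iptables,
    # confirm reachability with the pings above, then start the target in the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF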
00:18:38.089 [2024-10-17 19:20:47.182677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.089 [2024-10-17 19:20:47.321583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.348 [2024-10-17 19:20:47.395688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.348 [2024-10-17 19:20:47.395961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.348 [2024-10-17 19:20:47.396123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.348 [2024-10-17 19:20:47.396361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.348 [2024-10-17 19:20:47.396479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.348 [2024-10-17 19:20:47.398096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.348 [2024-10-17 19:20:47.398254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.348 [2024-10-17 19:20:47.398337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.348 [2024-10-17 19:20:47.398339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.348 [2024-10-17 19:20:47.458670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 [2024-10-17 19:20:48.261751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 Malloc0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.349 19:20:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 [2024-10-17 19:20:48.328839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:39.349 test case1: single bdev can't be used in multiple subsystems 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 [2024-10-17 19:20:48.356696] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:39.349 [2024-10-17 19:20:48.357263] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:39.349 [2024-10-17 19:20:48.357281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.349 request: 00:18:39.349 { 00:18:39.349 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:39.349 "namespace": { 00:18:39.349 "bdev_name": "Malloc0", 00:18:39.349 "no_auto_visible": false 00:18:39.349 }, 00:18:39.349 "method": "nvmf_subsystem_add_ns", 00:18:39.349 "req_id": 1 00:18:39.349 } 00:18:39.349 Got JSON-RPC error response 00:18:39.349 response: 00:18:39.349 { 00:18:39.349 "code": -32602, 00:18:39.349 "message": "Invalid parameters" 00:18:39.349 } 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:39.349 Adding namespace failed - expected result. 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:39.349 test case2: host connect to nvmf target in multiple paths 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.349 [2024-10-17 19:20:48.372902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:39.349 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:18:39.608 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:39.608 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:39.608 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.608 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:39.608 19:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.509 19:20:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:41.509 19:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:41.509 [global] 00:18:41.509 thread=1 00:18:41.509 invalidate=1 00:18:41.509 rw=write 00:18:41.509 time_based=1 00:18:41.509 runtime=1 00:18:41.509 ioengine=libaio 00:18:41.509 direct=1 00:18:41.509 bs=4096 00:18:41.509 iodepth=1 00:18:41.509 norandommap=0 00:18:41.509 numjobs=1 00:18:41.509 00:18:41.509 verify_dump=1 00:18:41.509 verify_backlog=512 00:18:41.509 verify_state_save=0 00:18:41.509 do_verify=1 00:18:41.509 verify=crc32c-intel 00:18:41.509 [job0] 00:18:41.509 filename=/dev/nvme0n1 00:18:41.509 Could not set queue depth (nvme0n1) 00:18:41.768 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.768 fio-3.35 00:18:41.768 Starting 1 thread 00:18:42.808 00:18:42.808 job0: (groupid=0, jobs=1): err= 0: pid=66118: Thu Oct 17 19:20:51 2024 00:18:42.808 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:18:42.808 slat (nsec): min=11286, max=50325, avg=13961.76, stdev=3585.03 00:18:42.808 clat (usec): min=139, max=404, avg=205.99, stdev=24.35 00:18:42.808 lat (usec): min=151, max=434, avg=219.95, stdev=24.61 00:18:42.808 clat percentiles (usec): 00:18:42.808 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 186], 00:18:42.808 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:18:42.808 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 245], 00:18:42.808 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 367], 00:18:42.808 | 99.99th=[ 404] 00:18:42.808 write: IOPS=2996, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1000msec); 0 zone resets 00:18:42.808 slat (usec): min=16, max=127, avg=20.31, stdev= 5.60 00:18:42.808 clat (usec): min=88, max=284, avg=122.56, stdev=17.11 00:18:42.808 lat (usec): min=108, max=411, avg=142.87, stdev=18.49 00:18:42.808 clat percentiles (usec): 00:18:42.808 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 106], 00:18:42.808 | 30.00th=[ 113], 40.00th=[ 118], 50.00th=[ 123], 60.00th=[ 128], 00:18:42.808 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 151], 00:18:42.808 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 210], 99.95th=[ 231], 00:18:42.808 | 99.99th=[ 285] 00:18:42.808 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:18:42.808 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:42.808 lat (usec) : 100=4.93%, 250=93.34%, 500=1.73% 00:18:42.808 cpu : usr=2.20%, sys=7.50%, ctx=5556, majf=0, minf=5 00:18:42.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.808 issued rwts: total=2560,2996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.808 00:18:42.808 Run status group 0 (all jobs): 00:18:42.808 READ: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1000-1000msec 00:18:42.808 WRITE: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.7MiB (12.3MB), run=1000-1000msec 00:18:42.808 00:18:42.808 Disk stats (read/write): 00:18:42.808 nvme0n1: ios=2427/2560, merge=0/0, ticks=523/343, in_queue=866, 
util=91.38% 00:18:42.808 19:20:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:42.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:42.808 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:42.808 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:42.809 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:42.809 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.809 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.809 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.066 rmmod nvme_tcp 00:18:43.066 rmmod nvme_fabrics 00:18:43.066 rmmod nvme_keyring 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 66026 ']' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 66026 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66026 ']' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66026 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66026 00:18:43.066 killing process with pid 66026 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66026' 00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 66026 
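The rpc_cmd calls traced above drive the target over its JSON-RPC socket (/var/tmp/spdk.sock in this run). Replayed by hand, the nmic setup and its expected-failure check would look roughly like the following; this is a sketch reconstructed from the logged calls, assuming scripts/rpc.py as the rpc_cmd backend:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"               # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # transport flags exactly as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # test case1: the same bdev cannot be claimed by a second subsystem; this add_ns must fail
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'
    # test case2: expose a second path to cnode1 for the multipath connect exercised above
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421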
00:18:43.066 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66026 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:43.324 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:18:43.583 00:18:43.583 real 0m6.253s 00:18:43.583 user 0m19.449s 00:18:43.583 sys 0m2.039s 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.583 ************************************ 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.583 
END TEST nvmf_nmic 00:18:43.583 ************************************ 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:43.583 ************************************ 00:18:43.583 START TEST nvmf_fio_target 00:18:43.583 ************************************ 00:18:43.583 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:43.843 * Looking for test storage... 00:18:43.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.843 --rc genhtml_branch_coverage=1 00:18:43.843 --rc genhtml_function_coverage=1 00:18:43.843 --rc genhtml_legend=1 00:18:43.843 --rc geninfo_all_blocks=1 00:18:43.843 --rc geninfo_unexecuted_blocks=1 00:18:43.843 00:18:43.843 ' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.843 --rc genhtml_branch_coverage=1 00:18:43.843 --rc genhtml_function_coverage=1 00:18:43.843 --rc genhtml_legend=1 00:18:43.843 --rc geninfo_all_blocks=1 00:18:43.843 --rc geninfo_unexecuted_blocks=1 00:18:43.843 00:18:43.843 ' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.843 --rc genhtml_branch_coverage=1 00:18:43.843 --rc genhtml_function_coverage=1 00:18:43.843 --rc genhtml_legend=1 00:18:43.843 --rc geninfo_all_blocks=1 00:18:43.843 --rc geninfo_unexecuted_blocks=1 00:18:43.843 00:18:43.843 ' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:43.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.843 --rc genhtml_branch_coverage=1 00:18:43.843 --rc genhtml_function_coverage=1 00:18:43.843 --rc genhtml_legend=1 00:18:43.843 --rc geninfo_all_blocks=1 00:18:43.843 --rc geninfo_unexecuted_blocks=1 00:18:43.843 00:18:43.843 ' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:43.843 
19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.843 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.844 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.844 19:20:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.844 19:20:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:43.844 Cannot find device "nvmf_init_br" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:43.844 Cannot find device "nvmf_init_br2" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:43.844 Cannot find device "nvmf_tgt_br" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.844 Cannot find device "nvmf_tgt_br2" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:43.844 Cannot find device "nvmf_init_br" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:43.844 Cannot find device "nvmf_init_br2" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:43.844 Cannot find device "nvmf_tgt_br" 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:18:43.844 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.103 Cannot find device "nvmf_tgt_br2" 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.103 Cannot find device "nvmf_br" 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:44.103 Cannot find device "nvmf_init_if" 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.103 Cannot find device "nvmf_init_if2" 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:18:44.103 
19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:44.103 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:44.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:44.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:18:44.362 00:18:44.362 --- 10.0.0.3 ping statistics --- 00:18:44.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.362 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:44.362 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:44.362 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:18:44.362 00:18:44.362 --- 10.0.0.4 ping statistics --- 00:18:44.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.362 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:44.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:44.362 00:18:44.362 --- 10.0.0.1 ping statistics --- 00:18:44.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.362 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:44.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:44.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:18:44.362 00:18:44.362 --- 10.0.0.2 ping statistics --- 00:18:44.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.362 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66353 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66353 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66353 ']' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.362 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.362 [2024-10-17 19:20:53.501803] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
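The ping checks above complete the virtual topology that nvmf_veth_init builds before the target is started. Condensed from the ip(8) trace earlier in this run (a sketch of the same steps, not a drop-in replacement for nvmf/common.sh; the interface and namespace names are the ones used there):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

The initiator-side interfaces (10.0.0.1/10.0.0.2) reach the target namespace (10.0.0.3/10.0.0.4) through the nvmf_br bridge, which is what the four pings verify; the iptables rules inserted just before them accept TCP port 4420 on the initiator interfaces.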
00:18:44.362 [2024-10-17 19:20:53.502637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.621 [2024-10-17 19:20:53.647866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.621 [2024-10-17 19:20:53.720714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.621 [2024-10-17 19:20:53.721081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.621 [2024-10-17 19:20:53.721263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.621 [2024-10-17 19:20:53.721415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.621 [2024-10-17 19:20:53.721457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.621 [2024-10-17 19:20:53.722933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.621 [2024-10-17 19:20:53.723073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.621 [2024-10-17 19:20:53.723697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.621 [2024-10-17 19:20:53.723709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.621 [2024-10-17 19:20:53.783890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.621 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.621 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:18:44.621 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.621 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.621 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.880 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.880 19:20:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.178 [2024-10-17 19:20:54.215853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.178 19:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.470 19:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:45.470 19:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.729 19:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:45.729 19:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:45.988 19:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:45.988 19:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.247 19:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:46.247 19:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:46.505 19:20:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.070 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:47.070 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.328 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:47.328 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.586 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:47.586 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:47.844 19:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.120 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:48.120 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.404 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:48.404 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.663 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:48.922 [2024-10-17 19:20:57.931882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:48.922 19:20:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:49.180 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:49.438 19:20:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:49.438 19:20:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:52.033 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:52.034 19:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:52.034 [global] 00:18:52.034 thread=1 00:18:52.034 invalidate=1 00:18:52.034 rw=write 00:18:52.034 time_based=1 00:18:52.034 runtime=1 00:18:52.034 ioengine=libaio 00:18:52.034 direct=1 00:18:52.034 bs=4096 00:18:52.034 iodepth=1 00:18:52.034 norandommap=0 00:18:52.034 numjobs=1 00:18:52.034 00:18:52.034 verify_dump=1 00:18:52.034 verify_backlog=512 00:18:52.034 verify_state_save=0 00:18:52.034 do_verify=1 00:18:52.034 verify=crc32c-intel 00:18:52.034 [job0] 00:18:52.034 filename=/dev/nvme0n1 00:18:52.034 [job1] 00:18:52.034 filename=/dev/nvme0n2 00:18:52.034 [job2] 00:18:52.034 filename=/dev/nvme0n3 00:18:52.034 [job3] 00:18:52.034 filename=/dev/nvme0n4 00:18:52.034 Could not set queue depth (nvme0n1) 00:18:52.034 Could not set queue depth (nvme0n2) 00:18:52.034 Could not set queue depth (nvme0n3) 00:18:52.034 Could not set queue depth (nvme0n4) 00:18:52.034 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.034 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.034 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.034 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.034 fio-3.35 00:18:52.034 Starting 4 threads 00:18:52.969 00:18:52.969 job0: (groupid=0, jobs=1): err= 0: pid=66537: Thu Oct 17 19:21:02 2024 00:18:52.969 read: IOPS=1610, BW=6442KiB/s (6596kB/s)(6448KiB/1001msec) 00:18:52.969 slat (nsec): min=11796, max=64533, avg=16702.59, stdev=6640.87 00:18:52.969 clat (usec): min=162, max=889, avg=315.85, stdev=65.03 00:18:52.969 lat (usec): min=178, max=932, avg=332.55, stdev=68.56 00:18:52.969 clat percentiles (usec): 00:18:52.969 | 1.00th=[ 190], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:18:52.969 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:18:52.969 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 482], 00:18:52.969 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 824], 99.95th=[ 889], 00:18:52.969 | 99.99th=[ 889] 
00:18:52.969 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:52.969 slat (nsec): min=17395, max=92844, avg=22716.88, stdev=5572.65 00:18:52.969 clat (usec): min=103, max=1711, avg=200.44, stdev=49.73 00:18:52.969 lat (usec): min=121, max=1753, avg=223.15, stdev=50.65 00:18:52.969 clat percentiles (usec): 00:18:52.969 | 1.00th=[ 116], 5.00th=[ 127], 10.00th=[ 139], 20.00th=[ 180], 00:18:52.969 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:18:52.969 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:18:52.969 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 594], 00:18:52.969 | 99.99th=[ 1713] 00:18:52.969 bw ( KiB/s): min= 8175, max= 8175, per=21.59%, avg=8175.00, stdev= 0.00, samples=1 00:18:52.969 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:18:52.969 lat (usec) : 250=54.97%, 500=43.09%, 750=1.83%, 1000=0.08% 00:18:52.969 lat (msec) : 2=0.03% 00:18:52.969 cpu : usr=2.00%, sys=5.40%, ctx=3660, majf=0, minf=7 00:18:52.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.969 issued rwts: total=1612,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.969 job1: (groupid=0, jobs=1): err= 0: pid=66541: Thu Oct 17 19:21:02 2024 00:18:52.969 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:52.969 slat (nsec): min=11903, max=45138, avg=15756.85, stdev=3330.33 00:18:52.969 clat (usec): min=223, max=558, avg=308.50, stdev=35.20 00:18:52.969 lat (usec): min=241, max=582, avg=324.25, stdev=35.73 00:18:52.969 clat percentiles (usec): 00:18:52.969 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:18:52.969 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:18:52.969 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 379], 00:18:52.969 | 99.00th=[ 445], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 562], 00:18:52.969 | 99.99th=[ 562] 00:18:52.969 write: IOPS=1974, BW=7896KiB/s (8086kB/s)(7904KiB/1001msec); 0 zone resets 00:18:52.969 slat (usec): min=15, max=115, avg=25.56, stdev= 8.39 00:18:52.969 clat (usec): min=100, max=7137, avg=225.04, stdev=177.63 00:18:52.969 lat (usec): min=122, max=7156, avg=250.61, stdev=179.28 00:18:52.969 clat percentiles (usec): 00:18:52.969 | 1.00th=[ 125], 5.00th=[ 139], 10.00th=[ 155], 20.00th=[ 188], 00:18:52.969 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:18:52.969 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 318], 95.00th=[ 363], 00:18:52.969 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 2900], 99.95th=[ 7111], 00:18:52.969 | 99.99th=[ 7111] 00:18:52.969 bw ( KiB/s): min= 8175, max= 8175, per=21.59%, avg=8175.00, stdev= 0.00, samples=1 00:18:52.969 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:18:52.969 lat (usec) : 250=47.92%, 500=51.71%, 750=0.28%, 1000=0.03% 00:18:52.969 lat (msec) : 4=0.03%, 10=0.03% 00:18:52.969 cpu : usr=1.20%, sys=6.20%, ctx=3515, majf=0, minf=9 00:18:52.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.969 issued rwts: total=1536,1976,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:52.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.969 job2: (groupid=0, jobs=1): err= 0: pid=66542: Thu Oct 17 19:21:02 2024 00:18:52.969 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:18:52.969 slat (nsec): min=11602, max=40720, avg=13491.97, stdev=2551.66 00:18:52.969 clat (usec): min=159, max=1823, avg=199.92, stdev=37.20 00:18:52.969 lat (usec): min=174, max=1835, avg=213.42, stdev=37.36 00:18:52.969 clat percentiles (usec): 00:18:52.969 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:18:52.969 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:18:52.969 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 227], 00:18:52.969 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 482], 99.95th=[ 603], 00:18:52.969 | 99.99th=[ 1827] 00:18:52.969 write: IOPS=2765, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:18:52.969 slat (nsec): min=13892, max=90748, avg=19809.64, stdev=4525.29 00:18:52.969 clat (usec): min=112, max=337, avg=141.09, stdev=14.43 00:18:52.970 lat (usec): min=130, max=373, avg=160.90, stdev=16.07 00:18:52.970 clat percentiles (usec): 00:18:52.970 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 130], 00:18:52.970 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:18:52.970 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:18:52.970 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 212], 99.95th=[ 285], 00:18:52.970 | 99.99th=[ 338] 00:18:52.970 bw ( KiB/s): min=12263, max=12263, per=32.38%, avg=12263.00, stdev= 0.00, samples=1 00:18:52.970 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:18:52.970 lat (usec) : 250=99.38%, 500=0.58%, 750=0.02% 00:18:52.970 lat (msec) : 2=0.02% 00:18:52.970 cpu : usr=2.20%, sys=6.90%, ctx=5328, majf=0, minf=15 00:18:52.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.970 issued rwts: total=2560,2768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.970 job3: (groupid=0, jobs=1): err= 0: pid=66543: Thu Oct 17 19:21:02 2024 00:18:52.970 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:18:52.970 slat (nsec): min=11342, max=49262, avg=13520.79, stdev=2236.22 00:18:52.970 clat (usec): min=156, max=1925, avg=199.51, stdev=48.91 00:18:52.970 lat (usec): min=168, max=1938, avg=213.03, stdev=49.02 00:18:52.970 clat percentiles (usec): 00:18:52.970 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:18:52.970 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:18:52.970 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 227], 00:18:52.970 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 775], 99.95th=[ 1631], 00:18:52.970 | 99.99th=[ 1926] 00:18:52.970 write: IOPS=2689, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec); 0 zone resets 00:18:52.970 slat (usec): min=13, max=114, avg=19.87, stdev= 3.87 00:18:52.970 clat (usec): min=113, max=293, avg=145.74, stdev=16.39 00:18:52.970 lat (usec): min=131, max=407, avg=165.61, stdev=17.75 00:18:52.970 clat percentiles (usec): 00:18:52.970 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 133], 00:18:52.970 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:18:52.970 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 
176], 00:18:52.970 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 221], 99.95th=[ 221], 00:18:52.970 | 99.99th=[ 293] 00:18:52.970 bw ( KiB/s): min=12263, max=12263, per=32.38%, avg=12263.00, stdev= 0.00, samples=1 00:18:52.970 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:18:52.970 lat (usec) : 250=99.68%, 500=0.25%, 750=0.02%, 1000=0.02% 00:18:52.970 lat (msec) : 2=0.04% 00:18:52.970 cpu : usr=2.20%, sys=6.79%, ctx=5257, majf=0, minf=7 00:18:52.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.970 issued rwts: total=2560,2695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.970 00:18:52.970 Run status group 0 (all jobs): 00:18:52.970 READ: bw=32.2MiB/s (33.8MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.3MiB (33.9MB), run=1001-1002msec 00:18:52.970 WRITE: bw=37.0MiB/s (38.8MB/s), 7896KiB/s-10.8MiB/s (8086kB/s-11.3MB/s), io=37.1MiB (38.9MB), run=1001-1002msec 00:18:52.970 00:18:52.970 Disk stats (read/write): 00:18:52.970 nvme0n1: ios=1586/1564, merge=0/0, ticks=514/317, in_queue=831, util=87.66% 00:18:52.970 nvme0n2: ios=1505/1536, merge=0/0, ticks=502/356, in_queue=858, util=88.97% 00:18:52.970 nvme0n3: ios=2070/2560, merge=0/0, ticks=417/384, in_queue=801, util=89.21% 00:18:52.970 nvme0n4: ios=2048/2513, merge=0/0, ticks=420/384, in_queue=804, util=89.78% 00:18:52.970 19:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:52.970 [global] 00:18:52.970 thread=1 00:18:52.970 invalidate=1 00:18:52.970 rw=randwrite 00:18:52.970 time_based=1 00:18:52.970 runtime=1 00:18:52.970 ioengine=libaio 00:18:52.970 direct=1 00:18:52.970 bs=4096 00:18:52.970 iodepth=1 00:18:52.970 norandommap=0 00:18:52.970 numjobs=1 00:18:52.970 00:18:52.970 verify_dump=1 00:18:52.970 verify_backlog=512 00:18:52.970 verify_state_save=0 00:18:52.970 do_verify=1 00:18:52.970 verify=crc32c-intel 00:18:52.970 [job0] 00:18:52.970 filename=/dev/nvme0n1 00:18:52.970 [job1] 00:18:52.970 filename=/dev/nvme0n2 00:18:52.970 [job2] 00:18:52.970 filename=/dev/nvme0n3 00:18:52.970 [job3] 00:18:52.970 filename=/dev/nvme0n4 00:18:52.970 Could not set queue depth (nvme0n1) 00:18:52.970 Could not set queue depth (nvme0n2) 00:18:52.970 Could not set queue depth (nvme0n3) 00:18:52.970 Could not set queue depth (nvme0n4) 00:18:52.970 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.970 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.970 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.970 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.970 fio-3.35 00:18:52.970 Starting 4 threads 00:18:54.344 00:18:54.344 job0: (groupid=0, jobs=1): err= 0: pid=66597: Thu Oct 17 19:21:03 2024 00:18:54.344 read: IOPS=2106, BW=8428KiB/s (8630kB/s)(8436KiB/1001msec) 00:18:54.344 slat (nsec): min=11728, max=40011, avg=14196.21, stdev=2947.13 00:18:54.344 clat (usec): min=145, max=472, avg=220.27, stdev=31.34 00:18:54.344 lat (usec): min=158, max=484, avg=234.47, stdev=31.52 
00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 196], 00:18:54.344 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:18:54.344 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 273], 00:18:54.344 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 416], 99.95th=[ 461], 00:18:54.344 | 99.99th=[ 474] 00:18:54.344 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:18:54.344 slat (usec): min=14, max=144, avg=21.70, stdev= 5.20 00:18:54.344 clat (usec): min=99, max=546, avg=172.39, stdev=32.71 00:18:54.344 lat (usec): min=118, max=564, avg=194.09, stdev=34.45 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 112], 5.00th=[ 124], 10.00th=[ 131], 20.00th=[ 145], 00:18:54.344 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 174], 60.00th=[ 180], 00:18:54.344 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:18:54.344 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 355], 99.95th=[ 445], 00:18:54.344 | 99.99th=[ 545] 00:18:54.344 bw ( KiB/s): min= 9496, max= 9496, per=26.89%, avg=9496.00, stdev= 0.00, samples=1 00:18:54.344 iops : min= 2374, max= 2374, avg=2374.00, stdev= 0.00, samples=1 00:18:54.344 lat (usec) : 100=0.02%, 250=91.88%, 500=8.07%, 750=0.02% 00:18:54.344 cpu : usr=2.50%, sys=6.20%, ctx=4669, majf=0, minf=15 00:18:54.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 issued rwts: total=2109,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.344 job1: (groupid=0, jobs=1): err= 0: pid=66598: Thu Oct 17 19:21:03 2024 00:18:54.344 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:54.344 slat (nsec): min=11198, max=61556, avg=13790.24, stdev=3165.11 00:18:54.344 clat (usec): min=180, max=1875, avg=243.27, stdev=46.57 00:18:54.344 lat (usec): min=193, max=1887, avg=257.06, stdev=46.67 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 219], 00:18:54.344 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:18:54.344 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:18:54.344 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 510], 99.95th=[ 553], 00:18:54.344 | 99.99th=[ 1876] 00:18:54.344 write: IOPS=2180, BW=8723KiB/s (8933kB/s)(8732KiB/1001msec); 0 zone resets 00:18:54.344 slat (usec): min=17, max=145, avg=22.34, stdev= 7.06 00:18:54.344 clat (usec): min=126, max=835, avg=191.00, stdev=27.26 00:18:54.344 lat (usec): min=149, max=880, avg=213.34, stdev=29.60 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:18:54.344 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:18:54.344 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 229], 00:18:54.344 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 494], 99.95th=[ 578], 00:18:54.344 | 99.99th=[ 840] 00:18:54.344 bw ( KiB/s): min= 8856, max= 8856, per=25.07%, avg=8856.00, stdev= 0.00, samples=1 00:18:54.344 iops : min= 2214, max= 2214, avg=2214.00, stdev= 0.00, samples=1 00:18:54.344 lat (usec) : 250=82.23%, 500=17.66%, 750=0.07%, 1000=0.02% 00:18:54.344 lat (msec) : 2=0.02% 00:18:54.344 cpu : usr=1.80%, sys=6.10%, ctx=4231, majf=0, minf=9 00:18:54.344 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 issued rwts: total=2048,2183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.344 job2: (groupid=0, jobs=1): err= 0: pid=66599: Thu Oct 17 19:21:03 2024 00:18:54.344 read: IOPS=1959, BW=7836KiB/s (8024kB/s)(7844KiB/1001msec) 00:18:54.344 slat (nsec): min=10075, max=80985, avg=14848.62, stdev=4158.06 00:18:54.344 clat (usec): min=166, max=466, avg=248.19, stdev=47.95 00:18:54.344 lat (usec): min=180, max=484, avg=263.04, stdev=47.19 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 206], 00:18:54.344 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:18:54.344 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 343], 00:18:54.344 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 465], 00:18:54.344 | 99.99th=[ 465] 00:18:54.344 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:54.344 slat (usec): min=10, max=158, avg=24.86, stdev= 8.43 00:18:54.344 clat (usec): min=109, max=8215, avg=207.40, stdev=254.40 00:18:54.344 lat (usec): min=130, max=8236, avg=232.27, stdev=254.41 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 130], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 157], 00:18:54.344 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 196], 00:18:54.344 | 70.00th=[ 206], 80.00th=[ 223], 90.00th=[ 255], 95.00th=[ 285], 00:18:54.344 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 3752], 99.95th=[ 3949], 00:18:54.344 | 99.99th=[ 8225] 00:18:54.344 bw ( KiB/s): min= 9288, max= 9288, per=26.30%, avg=9288.00, stdev= 0.00, samples=1 00:18:54.344 iops : min= 2322, max= 2322, avg=2322.00, stdev= 0.00, samples=1 00:18:54.344 lat (usec) : 250=74.81%, 500=24.97%, 1000=0.02% 00:18:54.344 lat (msec) : 2=0.02%, 4=0.15%, 10=0.02% 00:18:54.344 cpu : usr=2.00%, sys=6.30%, ctx=4011, majf=0, minf=11 00:18:54.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 issued rwts: total=1961,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.344 job3: (groupid=0, jobs=1): err= 0: pid=66600: Thu Oct 17 19:21:03 2024 00:18:54.344 read: IOPS=2009, BW=8040KiB/s (8233kB/s)(8048KiB/1001msec) 00:18:54.344 slat (nsec): min=8286, max=49189, avg=15569.18, stdev=4039.99 00:18:54.344 clat (usec): min=160, max=1831, avg=255.78, stdev=82.64 00:18:54.344 lat (usec): min=173, max=1852, avg=271.35, stdev=83.71 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 208], 00:18:54.344 | 30.00th=[ 221], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 253], 00:18:54.344 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 347], 00:18:54.344 | 99.00th=[ 465], 99.50th=[ 603], 99.90th=[ 1319], 99.95th=[ 1467], 00:18:54.344 | 99.99th=[ 1827] 00:18:54.344 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:54.344 slat (usec): min=17, max=158, avg=22.97, stdev= 6.96 00:18:54.344 clat (usec): min=111, max=384, avg=194.87, stdev=36.71 
00:18:54.344 lat (usec): min=129, max=494, avg=217.83, stdev=39.18 00:18:54.344 clat percentiles (usec): 00:18:54.344 | 1.00th=[ 130], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 165], 00:18:54.344 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:18:54.344 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 269], 00:18:54.344 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 338], 99.95th=[ 359], 00:18:54.344 | 99.99th=[ 383] 00:18:54.344 bw ( KiB/s): min= 9064, max= 9064, per=25.66%, avg=9064.00, stdev= 0.00, samples=1 00:18:54.344 iops : min= 2266, max= 2266, avg=2266.00, stdev= 0.00, samples=1 00:18:54.344 lat (usec) : 250=73.89%, 500=25.71%, 750=0.20%, 1000=0.05% 00:18:54.344 lat (msec) : 2=0.15% 00:18:54.344 cpu : usr=1.90%, sys=6.30%, ctx=4060, majf=0, minf=11 00:18:54.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.344 issued rwts: total=2012,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.344 00:18:54.344 Run status group 0 (all jobs): 00:18:54.344 READ: bw=31.7MiB/s (33.3MB/s), 7836KiB/s-8428KiB/s (8024kB/s-8630kB/s), io=31.8MiB (33.3MB), run=1001-1001msec 00:18:54.345 WRITE: bw=34.5MiB/s (36.2MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=34.5MiB (36.2MB), run=1001-1001msec 00:18:54.345 00:18:54.345 Disk stats (read/write): 00:18:54.345 nvme0n1: ios=1959/2048, merge=0/0, ticks=457/382, in_queue=839, util=88.78% 00:18:54.345 nvme0n2: ios=1747/2048, merge=0/0, ticks=443/401, in_queue=844, util=89.51% 00:18:54.345 nvme0n3: ios=1579/2048, merge=0/0, ticks=373/405, in_queue=778, util=87.70% 00:18:54.345 nvme0n4: ios=1630/2048, merge=0/0, ticks=393/423, in_queue=816, util=89.80% 00:18:54.345 19:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:54.345 [global] 00:18:54.345 thread=1 00:18:54.345 invalidate=1 00:18:54.345 rw=write 00:18:54.345 time_based=1 00:18:54.345 runtime=1 00:18:54.345 ioengine=libaio 00:18:54.345 direct=1 00:18:54.345 bs=4096 00:18:54.345 iodepth=128 00:18:54.345 norandommap=0 00:18:54.345 numjobs=1 00:18:54.345 00:18:54.345 verify_dump=1 00:18:54.345 verify_backlog=512 00:18:54.345 verify_state_save=0 00:18:54.345 do_verify=1 00:18:54.345 verify=crc32c-intel 00:18:54.345 [job0] 00:18:54.345 filename=/dev/nvme0n1 00:18:54.345 [job1] 00:18:54.345 filename=/dev/nvme0n2 00:18:54.345 [job2] 00:18:54.345 filename=/dev/nvme0n3 00:18:54.345 [job3] 00:18:54.345 filename=/dev/nvme0n4 00:18:54.345 Could not set queue depth (nvme0n1) 00:18:54.345 Could not set queue depth (nvme0n2) 00:18:54.345 Could not set queue depth (nvme0n3) 00:18:54.345 Could not set queue depth (nvme0n4) 00:18:54.345 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.345 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.345 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.345 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.345 fio-3.35 00:18:54.345 Starting 4 threads 00:18:55.721 00:18:55.721 job0: (groupid=0, jobs=1): err= 0: 
pid=66657: Thu Oct 17 19:21:04 2024 00:18:55.721 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1004msec) 00:18:55.721 slat (usec): min=6, max=7654, avg=176.67, stdev=736.37 00:18:55.721 clat (usec): min=881, max=31115, avg=21829.72, stdev=3068.99 00:18:55.721 lat (usec): min=4650, max=31158, avg=22006.39, stdev=3116.80 00:18:55.721 clat percentiles (usec): 00:18:55.721 | 1.00th=[ 5145], 5.00th=[17957], 10.00th=[19006], 20.00th=[20841], 00:18:55.721 | 30.00th=[21627], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:18:55.721 | 70.00th=[22676], 80.00th=[23200], 90.00th=[25035], 95.00th=[26346], 00:18:55.721 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:18:55.721 | 99.99th=[31065] 00:18:55.721 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:18:55.721 slat (usec): min=12, max=6435, avg=164.03, stdev=678.73 00:18:55.721 clat (usec): min=15996, max=30935, avg=22214.70, stdev=2150.78 00:18:55.721 lat (usec): min=16021, max=30958, avg=22378.74, stdev=2219.26 00:18:55.721 clat percentiles (usec): 00:18:55.721 | 1.00th=[17695], 5.00th=[19530], 10.00th=[20055], 20.00th=[20841], 00:18:55.721 | 30.00th=[21103], 40.00th=[21627], 50.00th=[22152], 60.00th=[22152], 00:18:55.722 | 70.00th=[22414], 80.00th=[22938], 90.00th=[24511], 95.00th=[27657], 00:18:55.722 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:18:55.722 | 99.99th=[31065] 00:18:55.722 bw ( KiB/s): min=12152, max=12288, per=21.44%, avg=12220.00, stdev=96.17, samples=2 00:18:55.722 iops : min= 3038, max= 3072, avg=3055.00, stdev=24.04, samples=2 00:18:55.722 lat (usec) : 1000=0.02% 00:18:55.722 lat (msec) : 10=0.64%, 20=10.26%, 50=89.08% 00:18:55.722 cpu : usr=3.09%, sys=9.67%, ctx=352, majf=0, minf=6 00:18:55.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:55.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.722 issued rwts: total=2671,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.722 job1: (groupid=0, jobs=1): err= 0: pid=66658: Thu Oct 17 19:21:04 2024 00:18:55.722 read: IOPS=3911, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1002msec) 00:18:55.722 slat (usec): min=6, max=6696, avg=127.53, stdev=583.81 00:18:55.722 clat (usec): min=863, max=22873, avg=16320.87, stdev=1919.98 00:18:55.722 lat (usec): min=2435, max=22902, avg=16448.40, stdev=1922.38 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[ 8848], 5.00th=[13698], 10.00th=[14615], 20.00th=[15664], 00:18:55.722 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:18:55.722 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:18:55.722 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22938], 99.95th=[22938], 00:18:55.722 | 99.99th=[22938] 00:18:55.722 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:55.722 slat (usec): min=8, max=7133, avg=113.83, stdev=703.77 00:18:55.722 clat (usec): min=7545, max=23116, avg=15283.17, stdev=1471.90 00:18:55.722 lat (usec): min=7584, max=23161, avg=15397.00, stdev=1608.80 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[11469], 5.00th=[13435], 10.00th=[14222], 20.00th=[14615], 00:18:55.722 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:18:55.722 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16712], 95.00th=[17695], 00:18:55.722 | 
99.00th=[21103], 99.50th=[21365], 99.90th=[22152], 99.95th=[22414], 00:18:55.722 | 99.99th=[23200] 00:18:55.722 bw ( KiB/s): min=16384, max=16384, per=28.74%, avg=16384.00, stdev= 0.00, samples=2 00:18:55.722 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:55.722 lat (usec) : 1000=0.01% 00:18:55.722 lat (msec) : 4=0.22%, 10=0.70%, 20=96.74%, 50=2.32% 00:18:55.722 cpu : usr=3.90%, sys=11.49%, ctx=247, majf=0, minf=1 00:18:55.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.722 issued rwts: total=3919,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.722 job2: (groupid=0, jobs=1): err= 0: pid=66659: Thu Oct 17 19:21:04 2024 00:18:55.722 read: IOPS=3277, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1006msec) 00:18:55.722 slat (usec): min=6, max=8818, avg=144.84, stdev=727.92 00:18:55.722 clat (usec): min=1771, max=26289, avg=18748.24, stdev=2316.43 00:18:55.722 lat (usec): min=5572, max=26304, avg=18893.07, stdev=2210.10 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[ 6325], 5.00th=[15270], 10.00th=[17433], 20.00th=[17957], 00:18:55.722 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:18:55.722 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20579], 95.00th=[21627], 00:18:55.722 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:18:55.722 | 99.99th=[26346] 00:18:55.722 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:18:55.722 slat (usec): min=9, max=6850, avg=138.57, stdev=651.16 00:18:55.722 clat (usec): min=12555, max=22150, avg=18128.69, stdev=1277.28 00:18:55.722 lat (usec): min=14628, max=22165, avg=18267.25, stdev=1104.37 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[14091], 5.00th=[16188], 10.00th=[16909], 20.00th=[17433], 00:18:55.722 | 30.00th=[17695], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:18:55.722 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19792], 95.00th=[20579], 00:18:55.722 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:18:55.722 | 99.99th=[22152] 00:18:55.722 bw ( KiB/s): min=13603, max=15096, per=25.17%, avg=14349.50, stdev=1055.71, samples=2 00:18:55.722 iops : min= 3400, max= 3774, avg=3587.00, stdev=264.46, samples=2 00:18:55.722 lat (msec) : 2=0.01%, 10=0.47%, 20=87.37%, 50=12.15% 00:18:55.722 cpu : usr=3.28%, sys=9.95%, ctx=216, majf=0, minf=1 00:18:55.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:55.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.722 issued rwts: total=3297,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.722 job3: (groupid=0, jobs=1): err= 0: pid=66660: Thu Oct 17 19:21:04 2024 00:18:55.722 read: IOPS=3382, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:18:55.722 slat (usec): min=6, max=5405, avg=141.00, stdev=695.71 00:18:55.722 clat (usec): min=443, max=21239, avg=18225.67, stdev=2067.05 00:18:55.722 lat (usec): min=3843, max=21267, avg=18366.67, stdev=1952.11 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[ 9241], 5.00th=[15008], 10.00th=[17171], 20.00th=[17695], 00:18:55.722 | 
30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18482], 00:18:55.722 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20317], 95.00th=[20317], 00:18:55.722 | 99.00th=[21103], 99.50th=[21103], 99.90th=[21103], 99.95th=[21103], 00:18:55.722 | 99.99th=[21365] 00:18:55.722 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:18:55.722 slat (usec): min=9, max=4978, avg=137.11, stdev=632.28 00:18:55.722 clat (usec): min=13159, max=21036, avg=17978.10, stdev=1085.70 00:18:55.722 lat (usec): min=15154, max=21063, avg=18115.20, stdev=886.46 00:18:55.722 clat percentiles (usec): 00:18:55.722 | 1.00th=[14091], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:18:55.722 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:18:55.722 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:18:55.722 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:18:55.722 | 99.99th=[21103] 00:18:55.722 bw ( KiB/s): min=14088, max=14584, per=25.15%, avg=14336.00, stdev=350.72, samples=2 00:18:55.722 iops : min= 3522, max= 3646, avg=3584.00, stdev=87.68, samples=2 00:18:55.722 lat (usec) : 500=0.01% 00:18:55.722 lat (msec) : 4=0.09%, 10=0.83%, 20=90.96%, 50=8.11% 00:18:55.722 cpu : usr=3.79%, sys=10.48%, ctx=219, majf=0, minf=3 00:18:55.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:55.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.722 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.722 00:18:55.722 Run status group 0 (all jobs): 00:18:55.722 READ: bw=51.6MiB/s (54.1MB/s), 10.4MiB/s-15.3MiB/s (10.9MB/s-16.0MB/s), io=51.9MiB (54.4MB), run=1002-1006msec 00:18:55.722 WRITE: bw=55.7MiB/s (58.4MB/s), 12.0MiB/s-16.0MiB/s (12.5MB/s-16.7MB/s), io=56.0MiB (58.7MB), run=1002-1006msec 00:18:55.722 00:18:55.722 Disk stats (read/write): 00:18:55.722 nvme0n1: ios=2413/2560, merge=0/0, ticks=17380/16748, in_queue=34128, util=87.98% 00:18:55.722 nvme0n2: ios=3371/3584, merge=0/0, ticks=26570/23210, in_queue=49780, util=88.68% 00:18:55.722 nvme0n3: ios=2816/3072, merge=0/0, ticks=12389/12473, in_queue=24862, util=88.66% 00:18:55.722 nvme0n4: ios=2912/3072, merge=0/0, ticks=12304/12397, in_queue=24701, util=89.73% 00:18:55.722 19:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:55.722 [global] 00:18:55.722 thread=1 00:18:55.722 invalidate=1 00:18:55.722 rw=randwrite 00:18:55.722 time_based=1 00:18:55.722 runtime=1 00:18:55.722 ioengine=libaio 00:18:55.722 direct=1 00:18:55.722 bs=4096 00:18:55.722 iodepth=128 00:18:55.722 norandommap=0 00:18:55.722 numjobs=1 00:18:55.722 00:18:55.722 verify_dump=1 00:18:55.722 verify_backlog=512 00:18:55.722 verify_state_save=0 00:18:55.722 do_verify=1 00:18:55.722 verify=crc32c-intel 00:18:55.722 [job0] 00:18:55.722 filename=/dev/nvme0n1 00:18:55.722 [job1] 00:18:55.722 filename=/dev/nvme0n2 00:18:55.722 [job2] 00:18:55.722 filename=/dev/nvme0n3 00:18:55.722 [job3] 00:18:55.722 filename=/dev/nvme0n4 00:18:55.722 Could not set queue depth (nvme0n1) 00:18:55.722 Could not set queue depth (nvme0n2) 00:18:55.722 Could not set queue depth (nvme0n3) 00:18:55.722 Could not set queue depth (nvme0n4) 00:18:55.722 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.722 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.722 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.722 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.722 fio-3.35 00:18:55.722 Starting 4 threads 00:18:57.099 00:18:57.099 job0: (groupid=0, jobs=1): err= 0: pid=66719: Thu Oct 17 19:21:06 2024 00:18:57.099 read: IOPS=2018, BW=8075KiB/s (8269kB/s)(8164KiB/1011msec) 00:18:57.099 slat (usec): min=8, max=18378, avg=237.37, stdev=1660.12 00:18:57.099 clat (usec): min=1811, max=52447, avg=31624.52, stdev=5305.32 00:18:57.099 lat (usec): min=9281, max=62955, avg=31861.89, stdev=5325.16 00:18:57.099 clat percentiles (usec): 00:18:57.099 | 1.00th=[ 9765], 5.00th=[20579], 10.00th=[27919], 20.00th=[30540], 00:18:57.099 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 00:18:57.099 | 70.00th=[33817], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:18:57.099 | 99.00th=[47973], 99.50th=[51119], 99.90th=[52167], 99.95th=[52691], 00:18:57.099 | 99.99th=[52691] 00:18:57.099 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:18:57.099 slat (usec): min=6, max=29127, avg=245.71, stdev=1746.24 00:18:57.099 clat (usec): min=14866, max=47681, avg=30961.53, stdev=4119.14 00:18:57.099 lat (usec): min=19481, max=47706, avg=31207.24, stdev=3826.25 00:18:57.099 clat percentiles (usec): 00:18:57.099 | 1.00th=[18220], 5.00th=[25297], 10.00th=[27657], 20.00th=[28705], 00:18:57.099 | 30.00th=[29492], 40.00th=[30278], 50.00th=[31065], 60.00th=[31589], 00:18:57.099 | 70.00th=[31851], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:18:57.099 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:18:57.099 | 99.99th=[47449] 00:18:57.099 bw ( KiB/s): min= 8192, max= 8192, per=17.88%, avg=8192.00, stdev= 0.00, samples=2 00:18:57.099 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:57.099 lat (msec) : 2=0.02%, 10=0.81%, 20=2.25%, 50=96.53%, 100=0.39% 00:18:57.099 cpu : usr=2.48%, sys=6.04%, ctx=90, majf=0, minf=7 00:18:57.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:57.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.099 issued rwts: total=2041,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.099 job1: (groupid=0, jobs=1): err= 0: pid=66720: Thu Oct 17 19:21:06 2024 00:18:57.099 read: IOPS=3173, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1008msec) 00:18:57.099 slat (usec): min=7, max=10457, avg=144.31, stdev=964.59 00:18:57.099 clat (usec): min=1626, max=31836, avg=19705.70, stdev=2509.27 00:18:57.099 lat (usec): min=11065, max=39138, avg=19850.00, stdev=2537.86 00:18:57.099 clat percentiles (usec): 00:18:57.099 | 1.00th=[11863], 5.00th=[13042], 10.00th=[18220], 20.00th=[19006], 00:18:57.099 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:18:57.099 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21627], 95.00th=[22414], 00:18:57.099 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:18:57.099 | 99.99th=[31851] 00:18:57.099 write: IOPS=3555, BW=13.9MiB/s 
(14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:18:57.099 slat (usec): min=9, max=13746, avg=143.61, stdev=919.12 00:18:57.099 clat (usec): min=9186, max=25583, avg=18042.85, stdev=1998.05 00:18:57.099 lat (usec): min=11930, max=25788, avg=18186.47, stdev=1826.30 00:18:57.099 clat percentiles (usec): 00:18:57.100 | 1.00th=[11469], 5.00th=[15401], 10.00th=[16188], 20.00th=[16712], 00:18:57.100 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:18:57.100 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20055], 95.00th=[20579], 00:18:57.100 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25560], 99.95th=[25560], 00:18:57.100 | 99.99th=[25560] 00:18:57.100 bw ( KiB/s): min=13816, max=14848, per=31.27%, avg=14332.00, stdev=729.73, samples=2 00:18:57.100 iops : min= 3454, max= 3712, avg=3583.00, stdev=182.43, samples=2 00:18:57.100 lat (msec) : 2=0.01%, 10=0.12%, 20=75.53%, 50=24.34% 00:18:57.100 cpu : usr=3.38%, sys=10.03%, ctx=155, majf=0, minf=8 00:18:57.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.100 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.100 job2: (groupid=0, jobs=1): err= 0: pid=66721: Thu Oct 17 19:21:06 2024 00:18:57.100 read: IOPS=1993, BW=7972KiB/s (8164kB/s)(8052KiB/1010msec) 00:18:57.100 slat (usec): min=8, max=18356, avg=236.83, stdev=1664.55 00:18:57.100 clat (usec): min=9158, max=52444, avg=32001.55, stdev=4807.65 00:18:57.100 lat (usec): min=9172, max=62989, avg=32238.39, stdev=4828.49 00:18:57.100 clat percentiles (usec): 00:18:57.100 | 1.00th=[16712], 5.00th=[22676], 10.00th=[29230], 20.00th=[30540], 00:18:57.100 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32900], 00:18:57.100 | 70.00th=[33817], 80.00th=[34866], 90.00th=[35390], 95.00th=[36439], 00:18:57.100 | 99.00th=[47973], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:18:57.100 | 99.99th=[52691] 00:18:57.100 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:18:57.100 slat (usec): min=8, max=28741, avg=246.25, stdev=1751.76 00:18:57.100 clat (usec): min=14892, max=47105, avg=30990.50, stdev=4080.65 00:18:57.100 lat (usec): min=19805, max=47131, avg=31236.75, stdev=3781.94 00:18:57.100 clat percentiles (usec): 00:18:57.100 | 1.00th=[18220], 5.00th=[25297], 10.00th=[27395], 20.00th=[28705], 00:18:57.100 | 30.00th=[29492], 40.00th=[30278], 50.00th=[31065], 60.00th=[31589], 00:18:57.100 | 70.00th=[31851], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:18:57.100 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:18:57.100 | 99.99th=[46924] 00:18:57.100 bw ( KiB/s): min= 8175, max= 8192, per=17.86%, avg=8183.50, stdev=12.02, samples=2 00:18:57.100 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:18:57.100 lat (msec) : 10=0.47%, 20=1.95%, 50=97.19%, 100=0.39% 00:18:57.100 cpu : usr=2.08%, sys=6.34%, ctx=90, majf=0, minf=11 00:18:57.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.100 issued rwts: total=2013,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.100 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:57.100 job3: (groupid=0, jobs=1): err= 0: pid=66722: Thu Oct 17 19:21:06 2024 00:18:57.100 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:18:57.100 slat (usec): min=9, max=9223, avg=129.70, stdev=854.67 00:18:57.100 clat (usec): min=9556, max=29752, avg=17920.71, stdev=2132.87 00:18:57.100 lat (usec): min=9569, max=35704, avg=18050.41, stdev=2175.42 00:18:57.100 clat percentiles (usec): 00:18:57.100 | 1.00th=[11076], 5.00th=[15664], 10.00th=[16188], 20.00th=[16909], 00:18:57.100 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17695], 60.00th=[18482], 00:18:57.100 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19530], 95.00th=[20317], 00:18:57.100 | 99.00th=[26346], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:18:57.100 | 99.99th=[29754] 00:18:57.100 write: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1006msec); 0 zone resets 00:18:57.100 slat (usec): min=7, max=15449, avg=129.79, stdev=824.66 00:18:57.100 clat (usec): min=805, max=26359, avg=16189.87, stdev=2330.09 00:18:57.100 lat (usec): min=7348, max=26386, avg=16319.66, stdev=2217.19 00:18:57.100 clat percentiles (usec): 00:18:57.100 | 1.00th=[ 8291], 5.00th=[13566], 10.00th=[14091], 20.00th=[15008], 00:18:57.100 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16319], 60.00th=[16450], 00:18:57.100 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[18744], 00:18:57.100 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:18:57.100 | 99.99th=[26346] 00:18:57.100 bw ( KiB/s): min=14336, max=15864, per=32.95%, avg=15100.00, stdev=1080.46, samples=2 00:18:57.100 iops : min= 3584, max= 3966, avg=3775.00, stdev=270.11, samples=2 00:18:57.100 lat (usec) : 1000=0.01% 00:18:57.100 lat (msec) : 10=1.40%, 20=93.64%, 50=4.94% 00:18:57.100 cpu : usr=3.18%, sys=11.54%, ctx=160, majf=0, minf=12 00:18:57.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:57.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.100 issued rwts: total=3584,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.100 00:18:57.100 Run status group 0 (all jobs): 00:18:57.100 READ: bw=41.9MiB/s (43.9MB/s), 7972KiB/s-13.9MiB/s (8164kB/s-14.6MB/s), io=42.3MiB (44.4MB), run=1006-1011msec 00:18:57.100 WRITE: bw=44.8MiB/s (46.9MB/s), 8103KiB/s-15.2MiB/s (8297kB/s-15.9MB/s), io=45.2MiB (47.4MB), run=1006-1011msec 00:18:57.100 00:18:57.100 Disk stats (read/write): 00:18:57.100 nvme0n1: ios=1586/1856, merge=0/0, ticks=48829/54803, in_queue=103632, util=88.77% 00:18:57.100 nvme0n2: ios=2735/3072, merge=0/0, ticks=51389/51983, in_queue=103372, util=89.49% 00:18:57.100 nvme0n3: ios=1563/1856, merge=0/0, ticks=48810/54911, in_queue=103721, util=89.95% 00:18:57.100 nvme0n4: ios=3099/3272, merge=0/0, ticks=52703/50129, in_queue=102832, util=90.93% 00:18:57.100 19:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:57.100 19:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66735 00:18:57.100 19:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:57.100 19:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:57.100 [global] 00:18:57.100 thread=1 00:18:57.100 invalidate=1 00:18:57.100 rw=read 
00:18:57.100 time_based=1 00:18:57.100 runtime=10 00:18:57.100 ioengine=libaio 00:18:57.100 direct=1 00:18:57.100 bs=4096 00:18:57.100 iodepth=1 00:18:57.100 norandommap=1 00:18:57.100 numjobs=1 00:18:57.100 00:18:57.100 [job0] 00:18:57.100 filename=/dev/nvme0n1 00:18:57.100 [job1] 00:18:57.100 filename=/dev/nvme0n2 00:18:57.100 [job2] 00:18:57.100 filename=/dev/nvme0n3 00:18:57.100 [job3] 00:18:57.100 filename=/dev/nvme0n4 00:18:57.100 Could not set queue depth (nvme0n1) 00:18:57.100 Could not set queue depth (nvme0n2) 00:18:57.100 Could not set queue depth (nvme0n3) 00:18:57.100 Could not set queue depth (nvme0n4) 00:18:57.358 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.358 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.358 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.358 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:57.358 fio-3.35 00:18:57.358 Starting 4 threads 00:19:00.641 19:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:00.641 fio: pid=66783, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:00.641 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33693696, buflen=4096 00:19:00.641 19:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:00.641 fio: pid=66781, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:00.641 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39514112, buflen=4096 00:19:00.899 19:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.899 19:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:01.157 fio: pid=66779, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:01.157 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44912640, buflen=4096 00:19:01.157 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.157 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:01.414 fio: pid=66780, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:01.415 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52129792, buflen=4096 00:19:01.415 00:19:01.415 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66779: Thu Oct 17 19:21:10 2024 00:19:01.415 read: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(42.8MiB/3687msec) 00:19:01.415 slat (usec): min=7, max=9956, avg=20.45, stdev=165.77 00:19:01.415 clat (usec): min=129, max=3538, avg=313.86, stdev=77.02 00:19:01.415 lat (usec): min=142, max=10487, avg=334.31, stdev=185.94 00:19:01.415 clat percentiles (usec): 00:19:01.415 | 1.00th=[ 169], 5.00th=[ 227], 10.00th=[ 245], 20.00th=[ 273], 00:19:01.415 | 30.00th=[ 289], 40.00th=[ 302], 
50.00th=[ 314], 60.00th=[ 322], 00:19:01.415 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 408], 00:19:01.415 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 816], 99.95th=[ 1270], 00:19:01.415 | 99.99th=[ 3261] 00:19:01.415 bw ( KiB/s): min=10448, max=13294, per=28.57%, avg=11800.86, stdev=963.42, samples=7 00:19:01.415 iops : min= 2612, max= 3323, avg=2950.14, stdev=240.73, samples=7 00:19:01.415 lat (usec) : 250=11.72%, 500=88.00%, 750=0.16%, 1000=0.05% 00:19:01.415 lat (msec) : 2=0.04%, 4=0.03% 00:19:01.415 cpu : usr=1.14%, sys=4.67%, ctx=10992, majf=0, minf=1 00:19:01.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 issued rwts: total=10966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.415 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66780: Thu Oct 17 19:21:10 2024 00:19:01.415 read: IOPS=3161, BW=12.3MiB/s (12.9MB/s)(49.7MiB/4026msec) 00:19:01.415 slat (usec): min=7, max=15688, avg=19.89, stdev=233.77 00:19:01.415 clat (usec): min=89, max=23902, avg=295.00, stdev=286.70 00:19:01.415 lat (usec): min=143, max=23935, avg=314.89, stdev=375.56 00:19:01.415 clat percentiles (usec): 00:19:01.415 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 188], 00:19:01.415 | 30.00th=[ 204], 40.00th=[ 221], 50.00th=[ 243], 60.00th=[ 330], 00:19:01.415 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 453], 00:19:01.415 | 99.00th=[ 494], 99.50th=[ 537], 99.90th=[ 2409], 99.95th=[ 3752], 00:19:01.415 | 99.99th=[12780] 00:19:01.415 bw ( KiB/s): min= 9272, max=17344, per=29.23%, avg=12070.71, stdev=3739.81, samples=7 00:19:01.415 iops : min= 2318, max= 4336, avg=3017.57, stdev=934.93, samples=7 00:19:01.415 lat (usec) : 100=0.01%, 250=51.59%, 500=47.55%, 750=0.57%, 1000=0.08% 00:19:01.415 lat (msec) : 2=0.07%, 4=0.09%, 10=0.02%, 20=0.01%, 50=0.01% 00:19:01.415 cpu : usr=0.92%, sys=4.47%, ctx=12739, majf=0, minf=1 00:19:01.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 issued rwts: total=12728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.415 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66781: Thu Oct 17 19:21:10 2024 00:19:01.415 read: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(37.7MiB/3352msec) 00:19:01.415 slat (usec): min=9, max=7655, avg=19.54, stdev=99.60 00:19:01.415 clat (usec): min=3, max=1996, avg=326.05, stdev=55.19 00:19:01.415 lat (usec): min=197, max=7989, avg=345.59, stdev=114.33 00:19:01.415 clat percentiles (usec): 00:19:01.415 | 1.00th=[ 243], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 289], 00:19:01.415 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:19:01.415 | 70.00th=[ 343], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 416], 00:19:01.415 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 693], 99.95th=[ 947], 00:19:01.415 | 99.99th=[ 1991] 00:19:01.415 bw ( KiB/s): min=10488, max=12512, per=27.92%, avg=11530.67, stdev=714.68, samples=6 00:19:01.415 iops : min= 2622, max= 3128, 
avg=2882.67, stdev=178.67, samples=6 00:19:01.415 lat (usec) : 4=0.01%, 250=1.71%, 500=97.72%, 750=0.49%, 1000=0.02% 00:19:01.415 lat (msec) : 2=0.04% 00:19:01.415 cpu : usr=1.37%, sys=4.51%, ctx=9656, majf=0, minf=1 00:19:01.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 issued rwts: total=9648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.415 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66783: Thu Oct 17 19:21:10 2024 00:19:01.415 read: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(32.1MiB/3012msec) 00:19:01.415 slat (usec): min=8, max=113, avg=19.78, stdev= 6.37 00:19:01.415 clat (usec): min=223, max=2509, avg=344.40, stdev=77.80 00:19:01.415 lat (usec): min=238, max=2533, avg=364.18, stdev=80.13 00:19:01.415 clat percentiles (usec): 00:19:01.415 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 273], 00:19:01.415 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 343], 60.00th=[ 375], 00:19:01.415 | 70.00th=[ 392], 80.00th=[ 416], 90.00th=[ 437], 95.00th=[ 453], 00:19:01.415 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 848], 99.95th=[ 963], 00:19:01.415 | 99.99th=[ 2507] 00:19:01.415 bw ( KiB/s): min= 9272, max=13160, per=26.50%, avg=10942.67, stdev=1884.16, samples=6 00:19:01.415 iops : min= 2318, max= 3290, avg=2735.67, stdev=471.04, samples=6 00:19:01.415 lat (usec) : 250=3.87%, 500=95.56%, 750=0.43%, 1000=0.09% 00:19:01.415 lat (msec) : 2=0.04%, 4=0.01% 00:19:01.415 cpu : usr=1.10%, sys=4.85%, ctx=8227, majf=0, minf=1 00:19:01.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.415 issued rwts: total=8227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.415 00:19:01.415 Run status group 0 (all jobs): 00:19:01.415 READ: bw=40.3MiB/s (42.3MB/s), 10.7MiB/s-12.3MiB/s (11.2MB/s-12.9MB/s), io=162MiB (170MB), run=3012-4026msec 00:19:01.415 00:19:01.415 Disk stats (read/write): 00:19:01.415 nvme0n1: ios=10697/0, merge=0/0, ticks=3358/0, in_queue=3358, util=95.65% 00:19:01.415 nvme0n2: ios=11856/0, merge=0/0, ticks=3465/0, in_queue=3465, util=95.44% 00:19:01.415 nvme0n3: ios=8921/0, merge=0/0, ticks=2927/0, in_queue=2927, util=96.40% 00:19:01.415 nvme0n4: ios=7726/0, merge=0/0, ticks=2667/0, in_queue=2667, util=96.66% 00:19:01.415 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.415 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:01.674 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.674 19:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:02.240 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:19:02.240 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:02.498 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.498 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:02.756 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.756 19:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66735 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.015 nvmf hotplug test: fio failed as expected 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:03.015 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.274 rmmod nvme_tcp 00:19:03.274 rmmod nvme_fabrics 00:19:03.274 rmmod nvme_keyring 00:19:03.274 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66353 ']' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66353 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66353 ']' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66353 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66353 00:19:03.533 killing process with pid 66353 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66353' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66353 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66353 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:19:03.533 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:03.792 19:21:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.792 19:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:19:03.792 00:19:03.792 real 0m20.276s 00:19:03.792 user 1m17.388s 00:19:03.792 sys 0m9.029s 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.792 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.792 ************************************ 00:19:03.792 END TEST nvmf_fio_target 00:19:03.792 ************************************ 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:04.051 ************************************ 00:19:04.051 START TEST nvmf_bdevio 00:19:04.051 ************************************ 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:04.051 * Looking for test storage... 
00:19:04.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:04.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.051 --rc genhtml_branch_coverage=1 00:19:04.051 --rc genhtml_function_coverage=1 00:19:04.051 --rc genhtml_legend=1 00:19:04.051 --rc geninfo_all_blocks=1 00:19:04.051 --rc geninfo_unexecuted_blocks=1 00:19:04.051 00:19:04.051 ' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:04.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.051 --rc genhtml_branch_coverage=1 00:19:04.051 --rc genhtml_function_coverage=1 00:19:04.051 --rc genhtml_legend=1 00:19:04.051 --rc geninfo_all_blocks=1 00:19:04.051 --rc geninfo_unexecuted_blocks=1 00:19:04.051 00:19:04.051 ' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:04.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.051 --rc genhtml_branch_coverage=1 00:19:04.051 --rc genhtml_function_coverage=1 00:19:04.051 --rc genhtml_legend=1 00:19:04.051 --rc geninfo_all_blocks=1 00:19:04.051 --rc geninfo_unexecuted_blocks=1 00:19:04.051 00:19:04.051 ' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:04.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.051 --rc genhtml_branch_coverage=1 00:19:04.051 --rc genhtml_function_coverage=1 00:19:04.051 --rc genhtml_legend=1 00:19:04.051 --rc geninfo_all_blocks=1 00:19:04.051 --rc geninfo_unexecuted_blocks=1 00:19:04.051 00:19:04.051 ' 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.051 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.311 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
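The Malloc backing device for this bdevio run is sized by the two variables set just above (MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 bytes). A minimal sketch of the block-count arithmetic, assuming the usual MiB-to-bytes conversion; only the two variables and the "131072 blocks" figure reported later in the run come from the trace:

# expected block count for "bdev_malloc_create 64 512 -b Malloc0" (traced further below)
MALLOC_BDEV_SIZE=64     # MiB
MALLOC_BLOCK_SIZE=512   # bytes
echo $(( MALLOC_BDEV_SIZE * 1024 * 1024 / MALLOC_BLOCK_SIZE ))   # 131072, matching "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)"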
00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:04.311 Cannot find device "nvmf_init_br" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:04.311 Cannot find device "nvmf_init_br2" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:04.311 Cannot find device "nvmf_tgt_br" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:04.311 Cannot find device "nvmf_tgt_br2" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:04.311 Cannot find device "nvmf_init_br" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:04.311 Cannot find device "nvmf_init_br2" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:04.311 Cannot find device "nvmf_tgt_br" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:04.311 Cannot find device "nvmf_tgt_br2" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:04.311 Cannot find device "nvmf_br" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:04.311 Cannot find device "nvmf_init_if" 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:19:04.311 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:04.311 Cannot find device "nvmf_init_if2" 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:04.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:04.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:04.312 
19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:04.312 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:04.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:04.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:19:04.570 00:19:04.570 --- 10.0.0.3 ping statistics --- 00:19:04.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.570 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:04.570 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:04.570 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:04.570 00:19:04.570 --- 10.0.0.4 ping statistics --- 00:19:04.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.570 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:04.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:04.570 00:19:04.570 --- 10.0.0.1 ping statistics --- 00:19:04.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.570 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:04.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:04.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:04.570 00:19:04.570 --- 10.0.0.2 ping statistics --- 00:19:04.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.570 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=67102 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 67102 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67102 ']' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.570 19:21:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 [2024-10-17 19:21:13.766425] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
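The ip commands traced above (nvmf_veth_init) build a small virtual topology: one veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side legs enslaved to nvmf_br. A condensed sketch, assuming a single initiator/target pair and omitting the *_if2/*_br2 twins and error handling; interface names and addresses are taken from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # the bridge ties the two pairs together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                             # host-side reachability check, as in the trace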
00:19:04.570 [2024-10-17 19:21:13.766531] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.829 [2024-10-17 19:21:13.900639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.829 [2024-10-17 19:21:13.984824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.829 [2024-10-17 19:21:13.984881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.829 [2024-10-17 19:21:13.984892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.829 [2024-10-17 19:21:13.984902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.829 [2024-10-17 19:21:13.984909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.829 [2024-10-17 19:21:13.986810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:04.829 [2024-10-17 19:21:13.986945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:04.829 [2024-10-17 19:21:13.987211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:04.829 [2024-10-17 19:21:13.987213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.829 [2024-10-17 19:21:14.066123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.088 [2024-10-17 19:21:14.189027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.088 Malloc0 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.088 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:05.089 [2024-10-17 19:21:14.259672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:05.089 { 00:19:05.089 "params": { 00:19:05.089 "name": "Nvme$subsystem", 00:19:05.089 "trtype": "$TEST_TRANSPORT", 00:19:05.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.089 "adrfam": "ipv4", 00:19:05.089 "trsvcid": "$NVMF_PORT", 00:19:05.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.089 "hdgst": ${hdgst:-false}, 00:19:05.089 "ddgst": ${ddgst:-false} 00:19:05.089 }, 00:19:05.089 "method": "bdev_nvme_attach_controller" 00:19:05.089 } 00:19:05.089 EOF 00:19:05.089 )") 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
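The configuration that bdevio consumes is generated on the fly: gen_nvmf_target_json expands the heredoc template above (hdgst/ddgst default to false) and the result reaches bdevio through process substitution, which is why the trace shows "--json /dev/fd/62". A minimal sketch of that pattern, using a hypothetical bdevio_json helper as a stand-in for gen_nvmf_target_json; the real helper wraps these attach parameters in a full SPDK "subsystems" config, and only the parameter values below appear in the trace:

bdevio_json() {                        # hypothetical stand-in for gen_nvmf_target_json
    cat <<-'EOF'
    {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }
    }
    EOF
}
test/bdev/bdevio/bdevio --json <(bdevio_json)   # <(...) is what appears as /dev/fd/62 in the trace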
00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:19:05.089 19:21:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:05.089 "params": { 00:19:05.089 "name": "Nvme1", 00:19:05.089 "trtype": "tcp", 00:19:05.089 "traddr": "10.0.0.3", 00:19:05.089 "adrfam": "ipv4", 00:19:05.089 "trsvcid": "4420", 00:19:05.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.089 "hdgst": false, 00:19:05.089 "ddgst": false 00:19:05.089 }, 00:19:05.089 "method": "bdev_nvme_attach_controller" 00:19:05.089 }' 00:19:05.089 [2024-10-17 19:21:14.318843] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:19:05.089 [2024-10-17 19:21:14.318948] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67136 ] 00:19:05.347 [2024-10-17 19:21:14.455957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.347 [2024-10-17 19:21:14.545319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.347 [2024-10-17 19:21:14.545500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.347 [2024-10-17 19:21:14.545504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.606 [2024-10-17 19:21:14.633771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.606 I/O targets: 00:19:05.606 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:05.606 00:19:05.606 00:19:05.606 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.606 http://cunit.sourceforge.net/ 00:19:05.606 00:19:05.606 00:19:05.606 Suite: bdevio tests on: Nvme1n1 00:19:05.606 Test: blockdev write read block ...passed 00:19:05.606 Test: blockdev write zeroes read block ...passed 00:19:05.606 Test: blockdev write zeroes read no split ...passed 00:19:05.606 Test: blockdev write zeroes read split ...passed 00:19:05.606 Test: blockdev write zeroes read split partial ...passed 00:19:05.606 Test: blockdev reset ...[2024-10-17 19:21:14.805389] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.606 [2024-10-17 19:21:14.805521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2023220 (9): Bad file descriptor 00:19:05.606 [2024-10-17 19:21:14.826092] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:05.606 passed 00:19:05.606 Test: blockdev write read 8 blocks ...passed 00:19:05.606 Test: blockdev write read size > 128k ...passed 00:19:05.606 Test: blockdev write read invalid size ...passed 00:19:05.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:05.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:05.606 Test: blockdev write read max offset ...passed 00:19:05.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:05.606 Test: blockdev writev readv 8 blocks ...passed 00:19:05.606 Test: blockdev writev readv 30 x 1block ...passed 00:19:05.606 Test: blockdev writev readv block ...passed 00:19:05.606 Test: blockdev writev readv size > 128k ...passed 00:19:05.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:05.606 Test: blockdev comparev and writev ...[2024-10-17 19:21:14.836657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.836975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.837170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.837199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.837814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.837857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.837881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.837895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.838377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.838412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.838434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.838447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.838869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.838903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.838926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.607 [2024-10-17 19:21:14.838938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:05.607 passed 00:19:05.607 Test: blockdev nvme passthru rw ...passed 00:19:05.607 Test: blockdev nvme passthru vendor specific ...[2024-10-17 19:21:14.839911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.607 [2024-10-17 19:21:14.839942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.840070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.607 [2024-10-17 19:21:14.840095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.840240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.607 [2024-10-17 19:21:14.840267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:05.607 [2024-10-17 19:21:14.840389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.607 [2024-10-17 19:21:14.840422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:05.607 passed 00:19:05.607 Test: blockdev nvme admin passthru ...passed 00:19:05.607 Test: blockdev copy ...passed 00:19:05.607 00:19:05.607 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.607 suites 1 1 n/a 0 0 00:19:05.607 tests 23 23 23 0 0 00:19:05.607 asserts 152 152 152 0 n/a 00:19:05.607 00:19:05.607 Elapsed time = 0.174 seconds 00:19:05.865 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.865 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.865 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:06.123 rmmod nvme_tcp 00:19:06.123 rmmod nvme_fabrics 00:19:06.123 rmmod nvme_keyring 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
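Firewall handling in this test is tag based: every rule added earlier went in through the ipts wrapper (common.sh@788 in the trace) with an "-m comment" tag of the form SPDK_NVMF:<original args>, and the teardown that follows below does not delete rules one by one but re-filters the saved ruleset. A short sketch of the pattern, taken from the commands in the trace:

# setup: tag the rule so it can be identified later (ipts wrapper)
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# teardown: drop every tagged rule in one pass (iptr wrapper)
iptables-save | grep -v SPDK_NVMF | iptables-restore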
00:19:06.123 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 67102 ']' 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 67102 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67102 ']' 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67102 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67102 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:06.124 killing process with pid 67102 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67102' 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67102 00:19:06.124 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67102 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:06.382 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:19:06.641 00:19:06.641 real 0m2.741s 00:19:06.641 user 0m7.861s 00:19:06.641 sys 0m0.939s 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:06.641 ************************************ 00:19:06.641 END TEST nvmf_bdevio 00:19:06.641 ************************************ 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:06.641 ************************************ 00:19:06.641 END TEST nvmf_target_core 00:19:06.641 ************************************ 00:19:06.641 00:19:06.641 real 2m37.947s 00:19:06.641 user 6m53.584s 00:19:06.641 sys 0m51.680s 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.641 19:21:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:06.900 19:21:15 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:06.900 19:21:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.900 19:21:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.900 19:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.900 ************************************ 00:19:06.900 START TEST nvmf_target_extra 00:19:06.900 ************************************ 00:19:06.900 19:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:06.900 * Looking for test storage... 
00:19:06.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.900 --rc genhtml_branch_coverage=1 00:19:06.900 --rc genhtml_function_coverage=1 00:19:06.900 --rc genhtml_legend=1 00:19:06.900 --rc geninfo_all_blocks=1 00:19:06.900 --rc geninfo_unexecuted_blocks=1 00:19:06.900 00:19:06.900 ' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.900 --rc genhtml_branch_coverage=1 00:19:06.900 --rc genhtml_function_coverage=1 00:19:06.900 --rc genhtml_legend=1 00:19:06.900 --rc geninfo_all_blocks=1 00:19:06.900 --rc geninfo_unexecuted_blocks=1 00:19:06.900 00:19:06.900 ' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.900 --rc genhtml_branch_coverage=1 00:19:06.900 --rc genhtml_function_coverage=1 00:19:06.900 --rc genhtml_legend=1 00:19:06.900 --rc geninfo_all_blocks=1 00:19:06.900 --rc geninfo_unexecuted_blocks=1 00:19:06.900 00:19:06.900 ' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.900 --rc genhtml_branch_coverage=1 00:19:06.900 --rc genhtml_function_coverage=1 00:19:06.900 --rc genhtml_legend=1 00:19:06.900 --rc geninfo_all_blocks=1 00:19:06.900 --rc geninfo_unexecuted_blocks=1 00:19:06.900 00:19:06.900 ' 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.900 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.901 19:21:16 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.901 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.901 ************************************ 00:19:06.901 START TEST nvmf_auth_target 00:19:06.901 ************************************ 00:19:06.901 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:07.160 * Looking for test storage... 
00:19:07.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.160 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:07.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.161 --rc genhtml_branch_coverage=1 00:19:07.161 --rc genhtml_function_coverage=1 00:19:07.161 --rc genhtml_legend=1 00:19:07.161 --rc geninfo_all_blocks=1 00:19:07.161 --rc geninfo_unexecuted_blocks=1 00:19:07.161 00:19:07.161 ' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:07.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.161 --rc genhtml_branch_coverage=1 00:19:07.161 --rc genhtml_function_coverage=1 00:19:07.161 --rc genhtml_legend=1 00:19:07.161 --rc geninfo_all_blocks=1 00:19:07.161 --rc geninfo_unexecuted_blocks=1 00:19:07.161 00:19:07.161 ' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:07.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.161 --rc genhtml_branch_coverage=1 00:19:07.161 --rc genhtml_function_coverage=1 00:19:07.161 --rc genhtml_legend=1 00:19:07.161 --rc geninfo_all_blocks=1 00:19:07.161 --rc geninfo_unexecuted_blocks=1 00:19:07.161 00:19:07.161 ' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:07.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.161 --rc genhtml_branch_coverage=1 00:19:07.161 --rc genhtml_function_coverage=1 00:19:07.161 --rc genhtml_legend=1 00:19:07.161 --rc geninfo_all_blocks=1 00:19:07.161 --rc geninfo_unexecuted_blocks=1 00:19:07.161 00:19:07.161 ' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.161 
19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:07.161 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:07.162 Cannot find device "nvmf_init_br" 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:07.162 Cannot find device "nvmf_init_br2" 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:07.162 Cannot find device "nvmf_tgt_br" 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.162 Cannot find device "nvmf_tgt_br2" 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:19:07.162 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:07.420 Cannot find device "nvmf_init_br" 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:07.420 Cannot find device "nvmf_init_br2" 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:07.420 Cannot find device "nvmf_tgt_br" 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:07.420 Cannot find device "nvmf_tgt_br2" 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:07.420 Cannot find device "nvmf_br" 00:19:07.420 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:07.421 Cannot find device "nvmf_init_if" 00:19:07.421 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:07.421 Cannot find device "nvmf_init_if2" 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.421 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.421 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.678 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.678 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.678 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.678 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.678 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:07.679 00:19:07.679 --- 10.0.0.3 ping statistics --- 00:19:07.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.679 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.679 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:07.679 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:19:07.679 00:19:07.679 --- 10.0.0.4 ping statistics --- 00:19:07.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.679 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:07.679 00:19:07.679 --- 10.0.0.1 ping statistics --- 00:19:07.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.679 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:07.679 00:19:07.679 --- 10.0.0.2 ping statistics --- 00:19:07.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.679 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67423 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67423 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67423 ']' 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
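The run of ip and iptables commands traced above is nvmf_veth_init from test/nvmf/common.sh building the virtual network the rest of the test talks over: veth pairs for the initiator and the target, a dedicated network namespace for the target side, and a bridge joining the peer ends, with TCP port 4420 opened for NVMe/TCP. A condensed sketch of that layout, using only interface names and addresses that appear in the trace (the parallel nvmf_init_if2/nvmf_tgt_if2 path and the cleanup steps are trimmed):

#!/usr/bin/env bash
# Condensed sketch of the topology built in the trace above (an assumed
# simplification; the real nvmf_veth_init also wires a second initiator and
# target interface, 10.0.0.2 and 10.0.0.4, the same way).
set -e
ip netns add nvmf_tgt_ns_spdk                              # target runs inside its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                            # bridge joining the two peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.3                                         # initiator -> target reachability check

With that in place, 10.0.0.3:4420 is the listener address the target advertises and 10.0.0.1 is the host-side source address that shows up in the qpair dumps further down.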
00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.679 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67455 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9931c13013d73f396a379fa2bfd5dc22e09e3aa40cac3975 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Nik 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9931c13013d73f396a379fa2bfd5dc22e09e3aa40cac3975 0 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9931c13013d73f396a379fa2bfd5dc22e09e3aa40cac3975 0 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9931c13013d73f396a379fa2bfd5dc22e09e3aa40cac3975 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:09.052 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.052 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Nik 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Nik 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Nik 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9e4c6d1cb07da9c7fbdc206113581a5ce80ec6569003a91b1d834d7f336b652f 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.twe 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9e4c6d1cb07da9c7fbdc206113581a5ce80ec6569003a91b1d834d7f336b652f 3 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9e4c6d1cb07da9c7fbdc206113581a5ce80ec6569003a91b1d834d7f336b652f 3 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9e4c6d1cb07da9c7fbdc206113581a5ce80ec6569003a91b1d834d7f336b652f 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.twe 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.twe 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.twe 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:09.052 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cb94aabad119a22a4b12444593a0e9c6 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.c3C 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cb94aabad119a22a4b12444593a0e9c6 1 00:19:09.052 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cb94aabad119a22a4b12444593a0e9c6 1 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cb94aabad119a22a4b12444593a0e9c6 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.c3C 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.c3C 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.c3C 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=056a1d67b8c63c9b40070c17bd0c9fac9fc883ad9db9ef29 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.q9Q 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 056a1d67b8c63c9b40070c17bd0c9fac9fc883ad9db9ef29 2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 056a1d67b8c63c9b40070c17bd0c9fac9fc883ad9db9ef29 2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=056a1d67b8c63c9b40070c17bd0c9fac9fc883ad9db9ef29 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.q9Q 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.q9Q 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.q9Q 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b1b55a42195a6f1e0831be1a82eaaa8d25bbfe9fd48ab34e 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.fRb 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b1b55a42195a6f1e0831be1a82eaaa8d25bbfe9fd48ab34e 2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b1b55a42195a6f1e0831be1a82eaaa8d25bbfe9fd48ab34e 2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b1b55a42195a6f1e0831be1a82eaaa8d25bbfe9fd48ab34e 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.fRb 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.fRb 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fRb 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.053 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:09.053 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:09.310 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=85dea0cc66b5d7710257f8a9324c2a52 00:19:09.310 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.FBe 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 85dea0cc66b5d7710257f8a9324c2a52 1 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 85dea0cc66b5d7710257f8a9324c2a52 1 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=85dea0cc66b5d7710257f8a9324c2a52 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.FBe 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.FBe 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.FBe 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=68e155761ec26f2c0fa0463625ca68b5a932c0a3949da1fa7ebefa9a61a361c2 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.tNd 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
68e155761ec26f2c0fa0463625ca68b5a932c0a3949da1fa7ebefa9a61a361c2 3 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 68e155761ec26f2c0fa0463625ca68b5a932c0a3949da1fa7ebefa9a61a361c2 3 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=68e155761ec26f2c0fa0463625ca68b5a932c0a3949da1fa7ebefa9a61a361c2 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.tNd 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.tNd 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tNd 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67423 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67423 ']' 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.311 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.569 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.569 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:09.569 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67455 /var/tmp/host.sock 00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67455 ']' 00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:09.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
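Each gen_dhchap_key call traced above pulls len/2 bytes from /dev/urandom with xxd, renders them as a hex string, and hands the result to format_dhchap_key; the inline python that does the final encoding is not echoed by the trace, but the secrets passed to nvme connect further down have the shape DHHC-1:<digest id>:<base64 payload>:, and the payload decodes back to the generated hex key plus a few trailing bytes that look like a checksum. A rough sketch under those assumptions (the decode check at the end is illustrative only, not part of the test scripts):

# Sketch of one gen_dhchap_key invocation, using names and values from the trace.
digest=null len=48
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # 48 hex chars, e.g. 9931c130...cac3975
file=$(mktemp -t "spdk.key-${digest}.XXX")           # e.g. /tmp/spdk.key-null.Nik
# format_dhchap_key would write the encoded form of $key into $file here.
chmod 0600 "$file"                                   # secrets stay readable by the owner only
echo "prefix: DHHC-1:0${digests[$digest]}:"          # digest id 0 (null) gives the DHHC-1:00: prefix

# Illustrative check: the base64 payload of the key0 secret used below decodes
# back to the hex key generated for key0 above, plus four extra bytes.
hexkey=9931c13013d73f396a379fa2bfd5dc22e09e3aa40cac3975
secret='DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==:'
payload=${secret#DHHC-1:00:}
payload=${payload%:}
printf '%s' "$payload" | base64 -d | head -c ${#hexkey} && echo   # prints the hex key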
00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.570 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.827 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.827 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:09.827 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:09.827 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.827 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nik 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Nik 00:19:10.085 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Nik 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.twe ]] 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.twe 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.twe 00:19:10.344 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.twe 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c3C 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.c3C 00:19:10.602 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.c3C 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.q9Q ]] 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q9Q 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q9Q 00:19:10.861 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q9Q 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fRb 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fRb 00:19:11.119 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fRb 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.FBe ]] 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FBe 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FBe 00:19:11.378 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FBe 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tNd 00:19:11.636 19:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tNd 00:19:11.636 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tNd 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:11.895 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.154 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.413 00:19:12.413 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.413 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.413 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.672 { 00:19:12.672 "cntlid": 1, 00:19:12.672 "qid": 0, 00:19:12.672 "state": "enabled", 00:19:12.672 "thread": "nvmf_tgt_poll_group_000", 00:19:12.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:12.672 "listen_address": { 00:19:12.672 "trtype": "TCP", 00:19:12.672 "adrfam": "IPv4", 00:19:12.672 "traddr": "10.0.0.3", 00:19:12.672 "trsvcid": "4420" 00:19:12.672 }, 00:19:12.672 "peer_address": { 00:19:12.672 "trtype": "TCP", 00:19:12.672 "adrfam": "IPv4", 00:19:12.672 "traddr": "10.0.0.1", 00:19:12.672 "trsvcid": "54242" 00:19:12.672 }, 00:19:12.672 "auth": { 00:19:12.672 "state": "completed", 00:19:12.672 "digest": "sha256", 00:19:12.672 "dhgroup": "null" 00:19:12.672 } 00:19:12.672 } 00:19:12.672 ]' 00:19:12.672 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.931 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.931 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.931 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:12.931 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.931 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.931 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.931 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.190 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:13.190 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.479 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.479 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:18.479 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.480 19:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.480 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.480 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.738 { 00:19:18.738 "cntlid": 3, 00:19:18.738 "qid": 0, 00:19:18.738 "state": "enabled", 00:19:18.738 "thread": "nvmf_tgt_poll_group_000", 00:19:18.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:18.738 "listen_address": { 00:19:18.738 "trtype": "TCP", 00:19:18.738 "adrfam": "IPv4", 00:19:18.738 "traddr": "10.0.0.3", 00:19:18.738 "trsvcid": "4420" 00:19:18.738 }, 00:19:18.738 "peer_address": { 00:19:18.738 "trtype": "TCP", 00:19:18.738 "adrfam": "IPv4", 00:19:18.738 "traddr": "10.0.0.1", 00:19:18.738 "trsvcid": "49206" 00:19:18.738 }, 00:19:18.738 "auth": { 00:19:18.738 "state": "completed", 00:19:18.738 "digest": "sha256", 00:19:18.738 "dhgroup": "null" 00:19:18.738 } 00:19:18.738 } 00:19:18.738 ]' 00:19:18.738 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.997 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.255 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret 
DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:19.255 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.188 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.446 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.704 00:19:20.704 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.704 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.704 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.962 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.963 { 00:19:20.963 "cntlid": 5, 00:19:20.963 "qid": 0, 00:19:20.963 "state": "enabled", 00:19:20.963 "thread": "nvmf_tgt_poll_group_000", 00:19:20.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:20.963 "listen_address": { 00:19:20.963 "trtype": "TCP", 00:19:20.963 "adrfam": "IPv4", 00:19:20.963 "traddr": "10.0.0.3", 00:19:20.963 "trsvcid": "4420" 00:19:20.963 }, 00:19:20.963 "peer_address": { 00:19:20.963 "trtype": "TCP", 00:19:20.963 "adrfam": "IPv4", 00:19:20.963 "traddr": "10.0.0.1", 00:19:20.963 "trsvcid": "49250" 00:19:20.963 }, 00:19:20.963 "auth": { 00:19:20.963 "state": "completed", 00:19:20.963 "digest": "sha256", 00:19:20.963 "dhgroup": "null" 00:19:20.963 } 00:19:20.963 } 00:19:20.963 ]' 00:19:20.963 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.221 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.479 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:21.479 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.046 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.047 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.047 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.337 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.904 00:19:22.904 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.904 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.904 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.162 { 00:19:23.162 "cntlid": 7, 00:19:23.162 "qid": 0, 00:19:23.162 "state": "enabled", 00:19:23.162 "thread": "nvmf_tgt_poll_group_000", 00:19:23.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:23.162 "listen_address": { 00:19:23.162 "trtype": "TCP", 00:19:23.162 "adrfam": "IPv4", 00:19:23.162 "traddr": "10.0.0.3", 00:19:23.162 "trsvcid": "4420" 00:19:23.162 }, 00:19:23.162 "peer_address": { 00:19:23.162 "trtype": "TCP", 00:19:23.162 "adrfam": "IPv4", 00:19:23.162 "traddr": "10.0.0.1", 00:19:23.162 "trsvcid": "49270" 00:19:23.162 }, 00:19:23.162 "auth": { 00:19:23.162 "state": "completed", 00:19:23.162 "digest": "sha256", 00:19:23.162 "dhgroup": "null" 00:19:23.162 } 00:19:23.162 } 00:19:23.162 ]' 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.162 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.729 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:23.730 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.297 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.556 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.814 00:19:24.814 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.814 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.814 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.073 { 00:19:25.073 "cntlid": 9, 00:19:25.073 "qid": 0, 00:19:25.073 "state": "enabled", 00:19:25.073 "thread": "nvmf_tgt_poll_group_000", 00:19:25.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:25.073 "listen_address": { 00:19:25.073 "trtype": "TCP", 00:19:25.073 "adrfam": "IPv4", 00:19:25.073 "traddr": "10.0.0.3", 00:19:25.073 "trsvcid": "4420" 00:19:25.073 }, 00:19:25.073 "peer_address": { 00:19:25.073 "trtype": "TCP", 00:19:25.073 "adrfam": "IPv4", 00:19:25.073 "traddr": "10.0.0.1", 00:19:25.073 "trsvcid": "49290" 00:19:25.073 }, 00:19:25.073 "auth": { 00:19:25.073 "state": "completed", 00:19:25.073 "digest": "sha256", 00:19:25.073 "dhgroup": "ffdhe2048" 00:19:25.073 } 00:19:25.073 } 00:19:25.073 ]' 00:19:25.073 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.332 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.590 
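The records above cover one full connect_authenticate iteration: the host bdev_nvme layer is restricted to a single digest/dhgroup pair, the host NQN is re-added to the subsystem with a key pair, a controller is attached with the same keys, the qpair's negotiated auth parameters are checked, and the controller is detached before the next combination. A minimal stand-alone sketch of that cycle, assuming the DH-HMAC-CHAP keys key0/ckey0 were registered earlier in the run, that the target answers on its default RPC socket while the host app uses /var/tmp/host.sock as in this log, and that the rpc/hostnqn/subnqn shell variables are just shorthand introduced here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side: allow only the digest/dhgroup combination under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # target side: authorize the host NQN with the key pair (ckey enables bidirectional auth)
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attach a controller that must authenticate with the same keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # target side: confirm the qpair finished authentication with the expected parameters
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'   # expect: completed

  # host side: tear the controller down before the next digest/dhgroup/key combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0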
19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:25.590 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.156 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.413 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:26.413 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.413 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.414 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.980 00:19:26.980 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.980 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.980 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.238 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.238 { 00:19:27.238 "cntlid": 11, 00:19:27.238 "qid": 0, 00:19:27.238 "state": "enabled", 00:19:27.238 "thread": "nvmf_tgt_poll_group_000", 00:19:27.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:27.238 "listen_address": { 00:19:27.238 "trtype": "TCP", 00:19:27.238 "adrfam": "IPv4", 00:19:27.238 "traddr": "10.0.0.3", 00:19:27.238 "trsvcid": "4420" 00:19:27.238 }, 00:19:27.238 "peer_address": { 00:19:27.238 "trtype": "TCP", 00:19:27.238 "adrfam": "IPv4", 00:19:27.238 "traddr": "10.0.0.1", 00:19:27.239 "trsvcid": "49320" 00:19:27.239 }, 00:19:27.239 "auth": { 00:19:27.239 "state": "completed", 00:19:27.239 "digest": "sha256", 00:19:27.239 "dhgroup": "ffdhe2048" 00:19:27.239 } 00:19:27.239 } 00:19:27.239 ]' 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.239 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.239 
19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.497 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:27.497 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.437 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.696 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.956 00:19:28.956 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.956 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.956 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.524 { 00:19:29.524 "cntlid": 13, 00:19:29.524 "qid": 0, 00:19:29.524 "state": "enabled", 00:19:29.524 "thread": "nvmf_tgt_poll_group_000", 00:19:29.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:29.524 "listen_address": { 00:19:29.524 "trtype": "TCP", 00:19:29.524 "adrfam": "IPv4", 00:19:29.524 "traddr": "10.0.0.3", 00:19:29.524 "trsvcid": "4420" 00:19:29.524 }, 00:19:29.524 "peer_address": { 00:19:29.524 "trtype": "TCP", 00:19:29.524 "adrfam": "IPv4", 00:19:29.524 "traddr": "10.0.0.1", 00:19:29.524 "trsvcid": "36024" 00:19:29.524 }, 00:19:29.524 "auth": { 00:19:29.524 "state": "completed", 00:19:29.524 "digest": "sha256", 00:19:29.524 "dhgroup": "ffdhe2048" 00:19:29.524 } 00:19:29.524 } 00:19:29.524 ]' 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.524 19:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.524 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.782 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:29.782 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.390 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.957 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.215 00:19:31.215 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.215 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.215 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.474 { 00:19:31.474 "cntlid": 15, 00:19:31.474 "qid": 0, 00:19:31.474 "state": "enabled", 00:19:31.474 "thread": "nvmf_tgt_poll_group_000", 00:19:31.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:31.474 "listen_address": { 00:19:31.474 "trtype": "TCP", 00:19:31.474 "adrfam": "IPv4", 00:19:31.474 "traddr": "10.0.0.3", 00:19:31.474 "trsvcid": "4420" 00:19:31.474 }, 00:19:31.474 "peer_address": { 00:19:31.474 "trtype": "TCP", 00:19:31.474 "adrfam": "IPv4", 00:19:31.474 "traddr": "10.0.0.1", 00:19:31.474 "trsvcid": "36050" 00:19:31.474 }, 00:19:31.474 "auth": { 00:19:31.474 "state": "completed", 00:19:31.474 "digest": "sha256", 00:19:31.474 "dhgroup": "ffdhe2048" 00:19:31.474 } 00:19:31.474 } 00:19:31.474 ]' 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.474 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.733 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.733 
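The nvme_connect/disconnect steps interleaved through this section exercise the kernel initiator against the same subsystem with the matching secrets. A sketch of that path, with the DHHC-1 strings shown as placeholders rather than the literal values from this run:

  # kernel initiator: connect with host secret and controller secret (the latter enables bidirectional auth)
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b \
      --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'

  # tear down before the host is removed from the subsystem again
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0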
19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.733 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.992 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:31.992 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.560 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.176 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.435 00:19:33.435 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.435 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.435 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.694 { 00:19:33.694 "cntlid": 17, 00:19:33.694 "qid": 0, 00:19:33.694 "state": "enabled", 00:19:33.694 "thread": "nvmf_tgt_poll_group_000", 00:19:33.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:33.694 "listen_address": { 00:19:33.694 "trtype": "TCP", 00:19:33.694 "adrfam": "IPv4", 00:19:33.694 "traddr": "10.0.0.3", 00:19:33.694 "trsvcid": "4420" 00:19:33.694 }, 00:19:33.694 "peer_address": { 00:19:33.694 "trtype": "TCP", 00:19:33.694 "adrfam": "IPv4", 00:19:33.694 "traddr": "10.0.0.1", 00:19:33.694 "trsvcid": "36074" 00:19:33.694 }, 00:19:33.694 "auth": { 00:19:33.694 "state": "completed", 00:19:33.694 "digest": "sha256", 00:19:33.694 "dhgroup": "ffdhe3072" 00:19:33.694 } 00:19:33.694 } 00:19:33.694 ]' 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.694 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.952 19:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.952 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.952 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.211 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:34.211 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.777 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.036 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.602 00:19:35.602 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.602 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.602 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.862 { 00:19:35.862 "cntlid": 19, 00:19:35.862 "qid": 0, 00:19:35.862 "state": "enabled", 00:19:35.862 "thread": "nvmf_tgt_poll_group_000", 00:19:35.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:35.862 "listen_address": { 00:19:35.862 "trtype": "TCP", 00:19:35.862 "adrfam": "IPv4", 00:19:35.862 "traddr": "10.0.0.3", 00:19:35.862 "trsvcid": "4420" 00:19:35.862 }, 00:19:35.862 "peer_address": { 00:19:35.862 "trtype": "TCP", 00:19:35.862 "adrfam": "IPv4", 00:19:35.862 "traddr": "10.0.0.1", 00:19:35.862 "trsvcid": "36084" 00:19:35.862 }, 00:19:35.862 "auth": { 00:19:35.862 "state": "completed", 00:19:35.862 "digest": "sha256", 00:19:35.862 "dhgroup": "ffdhe3072" 00:19:35.862 } 00:19:35.862 } 00:19:35.862 ]' 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.862 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.862 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.862 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.862 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.121 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:36.121 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:37.054 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.054 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:37.054 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.054 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.055 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.055 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.055 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.055 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.313 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.609 00:19:37.609 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.609 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.609 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.175 { 00:19:38.175 "cntlid": 21, 00:19:38.175 "qid": 0, 00:19:38.175 "state": "enabled", 00:19:38.175 "thread": "nvmf_tgt_poll_group_000", 00:19:38.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:38.175 "listen_address": { 00:19:38.175 "trtype": "TCP", 00:19:38.175 "adrfam": "IPv4", 00:19:38.175 "traddr": "10.0.0.3", 00:19:38.175 "trsvcid": "4420" 00:19:38.175 }, 00:19:38.175 "peer_address": { 00:19:38.175 "trtype": "TCP", 00:19:38.175 "adrfam": "IPv4", 00:19:38.175 "traddr": "10.0.0.1", 00:19:38.175 "trsvcid": "36100" 00:19:38.175 }, 00:19:38.175 "auth": { 00:19:38.175 "state": "completed", 00:19:38.175 "digest": "sha256", 00:19:38.175 "dhgroup": "ffdhe3072" 00:19:38.175 } 00:19:38.175 } 00:19:38.175 ]' 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.175 19:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.175 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.434 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:38.434 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.369 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.628 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.886 00:19:39.886 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.886 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.886 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.144 { 00:19:40.144 "cntlid": 23, 00:19:40.144 "qid": 0, 00:19:40.144 "state": "enabled", 00:19:40.144 "thread": "nvmf_tgt_poll_group_000", 00:19:40.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:40.144 "listen_address": { 00:19:40.144 "trtype": "TCP", 00:19:40.144 "adrfam": "IPv4", 00:19:40.144 "traddr": "10.0.0.3", 00:19:40.144 "trsvcid": "4420" 00:19:40.144 }, 00:19:40.144 "peer_address": { 00:19:40.144 "trtype": "TCP", 00:19:40.144 "adrfam": "IPv4", 00:19:40.144 "traddr": "10.0.0.1", 00:19:40.144 "trsvcid": "56076" 00:19:40.144 }, 00:19:40.144 "auth": { 00:19:40.144 "state": "completed", 00:19:40.144 "digest": "sha256", 00:19:40.144 "dhgroup": "ffdhe3072" 00:19:40.144 } 00:19:40.144 } 00:19:40.144 ]' 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:40.144 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.403 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.403 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.403 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.403 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.403 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.661 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:40.662 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.229 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.495 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.062 00:19:42.062 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.062 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.062 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.320 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.320 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.320 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.321 { 00:19:42.321 "cntlid": 25, 00:19:42.321 "qid": 0, 00:19:42.321 "state": "enabled", 00:19:42.321 "thread": "nvmf_tgt_poll_group_000", 00:19:42.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:42.321 "listen_address": { 00:19:42.321 "trtype": "TCP", 00:19:42.321 "adrfam": "IPv4", 00:19:42.321 "traddr": "10.0.0.3", 00:19:42.321 "trsvcid": "4420" 00:19:42.321 }, 00:19:42.321 "peer_address": { 00:19:42.321 "trtype": "TCP", 00:19:42.321 "adrfam": "IPv4", 00:19:42.321 "traddr": "10.0.0.1", 00:19:42.321 "trsvcid": "56112" 00:19:42.321 }, 00:19:42.321 "auth": { 00:19:42.321 "state": "completed", 00:19:42.321 "digest": "sha256", 00:19:42.321 "dhgroup": "ffdhe4096" 00:19:42.321 } 00:19:42.321 } 00:19:42.321 ]' 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.321 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.887 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:42.887 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.455 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.021 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.021 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.021 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.021 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.021 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.279 00:19:44.279 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.279 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.279 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.536 { 00:19:44.536 "cntlid": 27, 00:19:44.536 "qid": 0, 00:19:44.536 "state": "enabled", 00:19:44.536 "thread": "nvmf_tgt_poll_group_000", 00:19:44.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:44.536 "listen_address": { 00:19:44.536 "trtype": "TCP", 00:19:44.536 "adrfam": "IPv4", 00:19:44.536 "traddr": "10.0.0.3", 00:19:44.536 "trsvcid": "4420" 00:19:44.536 }, 00:19:44.536 "peer_address": { 00:19:44.536 "trtype": "TCP", 00:19:44.536 "adrfam": "IPv4", 00:19:44.536 "traddr": "10.0.0.1", 00:19:44.536 "trsvcid": "56136" 00:19:44.536 }, 00:19:44.536 "auth": { 00:19:44.536 "state": "completed", 
00:19:44.536 "digest": "sha256", 00:19:44.536 "dhgroup": "ffdhe4096" 00:19:44.536 } 00:19:44.536 } 00:19:44.536 ]' 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.536 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.794 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.794 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.794 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.794 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.794 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.053 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:45.053 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:45.620 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.620 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:45.620 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.620 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.878 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.878 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.878 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.878 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.138 19:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.138 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.396 00:19:46.655 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.655 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.655 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.914 { 00:19:46.914 "cntlid": 29, 00:19:46.914 "qid": 0, 00:19:46.914 "state": "enabled", 00:19:46.914 "thread": "nvmf_tgt_poll_group_000", 00:19:46.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:46.914 "listen_address": { 00:19:46.914 "trtype": "TCP", 00:19:46.914 "adrfam": "IPv4", 00:19:46.914 "traddr": "10.0.0.3", 00:19:46.914 "trsvcid": "4420" 00:19:46.914 }, 00:19:46.914 "peer_address": { 00:19:46.914 "trtype": "TCP", 00:19:46.914 "adrfam": 
"IPv4", 00:19:46.914 "traddr": "10.0.0.1", 00:19:46.914 "trsvcid": "56176" 00:19:46.914 }, 00:19:46.914 "auth": { 00:19:46.914 "state": "completed", 00:19:46.914 "digest": "sha256", 00:19:46.914 "dhgroup": "ffdhe4096" 00:19:46.914 } 00:19:46.914 } 00:19:46.914 ]' 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.914 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.172 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.172 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.172 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.172 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.172 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.431 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:47.431 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.036 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:48.601 19:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.601 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.859 00:19:48.859 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.859 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.859 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.117 { 00:19:49.117 "cntlid": 31, 00:19:49.117 "qid": 0, 00:19:49.117 "state": "enabled", 00:19:49.117 "thread": "nvmf_tgt_poll_group_000", 00:19:49.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:49.117 "listen_address": { 00:19:49.117 "trtype": "TCP", 00:19:49.117 "adrfam": "IPv4", 00:19:49.117 "traddr": "10.0.0.3", 00:19:49.117 "trsvcid": "4420" 00:19:49.117 }, 00:19:49.117 "peer_address": { 00:19:49.117 "trtype": "TCP", 
00:19:49.117 "adrfam": "IPv4", 00:19:49.117 "traddr": "10.0.0.1", 00:19:49.117 "trsvcid": "35898" 00:19:49.117 }, 00:19:49.117 "auth": { 00:19:49.117 "state": "completed", 00:19:49.117 "digest": "sha256", 00:19:49.117 "dhgroup": "ffdhe4096" 00:19:49.117 } 00:19:49.117 } 00:19:49.117 ]' 00:19:49.117 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.375 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.375 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.375 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.375 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.375 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.376 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.376 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.633 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:49.634 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.620 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:50.879 
19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.879 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.445 00:19:51.445 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.445 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.445 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.703 { 00:19:51.703 "cntlid": 33, 00:19:51.703 "qid": 0, 00:19:51.703 "state": "enabled", 00:19:51.703 "thread": "nvmf_tgt_poll_group_000", 00:19:51.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:51.703 "listen_address": { 00:19:51.703 "trtype": "TCP", 00:19:51.703 "adrfam": "IPv4", 00:19:51.703 "traddr": 
"10.0.0.3", 00:19:51.703 "trsvcid": "4420" 00:19:51.703 }, 00:19:51.703 "peer_address": { 00:19:51.703 "trtype": "TCP", 00:19:51.703 "adrfam": "IPv4", 00:19:51.703 "traddr": "10.0.0.1", 00:19:51.703 "trsvcid": "35916" 00:19:51.703 }, 00:19:51.703 "auth": { 00:19:51.703 "state": "completed", 00:19:51.703 "digest": "sha256", 00:19:51.703 "dhgroup": "ffdhe6144" 00:19:51.703 } 00:19:51.703 } 00:19:51.703 ]' 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.703 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.704 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.962 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:51.962 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:19:52.528 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.528 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:52.528 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.528 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.786 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.786 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.786 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.786 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.044 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.609 00:19:53.609 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.609 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.609 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.868 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.868 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.868 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.868 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.868 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.868 { 00:19:53.868 "cntlid": 35, 00:19:53.868 "qid": 0, 00:19:53.868 "state": "enabled", 00:19:53.868 "thread": "nvmf_tgt_poll_group_000", 
00:19:53.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:53.868 "listen_address": { 00:19:53.868 "trtype": "TCP", 00:19:53.868 "adrfam": "IPv4", 00:19:53.868 "traddr": "10.0.0.3", 00:19:53.868 "trsvcid": "4420" 00:19:53.868 }, 00:19:53.868 "peer_address": { 00:19:53.868 "trtype": "TCP", 00:19:53.868 "adrfam": "IPv4", 00:19:53.868 "traddr": "10.0.0.1", 00:19:53.868 "trsvcid": "35942" 00:19:53.868 }, 00:19:53.868 "auth": { 00:19:53.868 "state": "completed", 00:19:53.868 "digest": "sha256", 00:19:53.868 "dhgroup": "ffdhe6144" 00:19:53.868 } 00:19:53.868 } 00:19:53.868 ]' 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.868 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.126 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.126 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.126 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.384 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:54.384 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.973 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.973 19:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.231 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.798 00:19:55.798 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.798 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.798 19:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.057 { 
00:19:56.057 "cntlid": 37, 00:19:56.057 "qid": 0, 00:19:56.057 "state": "enabled", 00:19:56.057 "thread": "nvmf_tgt_poll_group_000", 00:19:56.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:56.057 "listen_address": { 00:19:56.057 "trtype": "TCP", 00:19:56.057 "adrfam": "IPv4", 00:19:56.057 "traddr": "10.0.0.3", 00:19:56.057 "trsvcid": "4420" 00:19:56.057 }, 00:19:56.057 "peer_address": { 00:19:56.057 "trtype": "TCP", 00:19:56.057 "adrfam": "IPv4", 00:19:56.057 "traddr": "10.0.0.1", 00:19:56.057 "trsvcid": "35964" 00:19:56.057 }, 00:19:56.057 "auth": { 00:19:56.057 "state": "completed", 00:19:56.057 "digest": "sha256", 00:19:56.057 "dhgroup": "ffdhe6144" 00:19:56.057 } 00:19:56.057 } 00:19:56.057 ]' 00:19:56.057 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.314 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.572 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:56.572 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.506 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.507 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.441 00:19:58.441 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:19:58.442 { 00:19:58.442 "cntlid": 39, 00:19:58.442 "qid": 0, 00:19:58.442 "state": "enabled", 00:19:58.442 "thread": "nvmf_tgt_poll_group_000", 00:19:58.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:19:58.442 "listen_address": { 00:19:58.442 "trtype": "TCP", 00:19:58.442 "adrfam": "IPv4", 00:19:58.442 "traddr": "10.0.0.3", 00:19:58.442 "trsvcid": "4420" 00:19:58.442 }, 00:19:58.442 "peer_address": { 00:19:58.442 "trtype": "TCP", 00:19:58.442 "adrfam": "IPv4", 00:19:58.442 "traddr": "10.0.0.1", 00:19:58.442 "trsvcid": "33798" 00:19:58.442 }, 00:19:58.442 "auth": { 00:19:58.442 "state": "completed", 00:19:58.442 "digest": "sha256", 00:19:58.442 "dhgroup": "ffdhe6144" 00:19:58.442 } 00:19:58.442 } 00:19:58.442 ]' 00:19:58.442 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.701 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.960 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:58.960 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.918 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.177 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.129 00:20:01.129 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.129 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.129 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.388 { 00:20:01.388 "cntlid": 41, 00:20:01.388 "qid": 0, 00:20:01.388 "state": "enabled", 00:20:01.388 "thread": "nvmf_tgt_poll_group_000", 00:20:01.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:01.388 "listen_address": { 00:20:01.388 "trtype": "TCP", 00:20:01.388 "adrfam": "IPv4", 00:20:01.388 "traddr": "10.0.0.3", 00:20:01.388 "trsvcid": "4420" 00:20:01.388 }, 00:20:01.388 "peer_address": { 00:20:01.388 "trtype": "TCP", 00:20:01.388 "adrfam": "IPv4", 00:20:01.388 "traddr": "10.0.0.1", 00:20:01.388 "trsvcid": "33806" 00:20:01.388 }, 00:20:01.388 "auth": { 00:20:01.388 "state": "completed", 00:20:01.388 "digest": "sha256", 00:20:01.388 "dhgroup": "ffdhe8192" 00:20:01.388 } 00:20:01.388 } 00:20:01.388 ]' 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.388 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.647 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:01.647 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
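Each cycle above ends by reading back the single active qpair and asserting the negotiated authentication parameters with jq. A minimal standalone version of that check is sketched below; the RPC path, subsystem NQN and jq filters are the ones used in this run, and the expected values are simply those of the sha256/ffdhe8192/key0 cycle just completed. Running the target-side RPC against the default socket is an assumption, since the trace drives it through its rpc_cmd helper.

  # Dump the qpairs of the subsystem under test (same RPC the trace calls via rpc_cmd)
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # Assert the negotiated digest, DH group and final DH-HMAC-CHAP state
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]] || echo "unexpected digest"
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]] || echo "unexpected dhgroup"
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || echo "auth did not complete"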
00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.581 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.840 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.407 00:20:03.407 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.407 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.407 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.665 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.665 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.924 19:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.924 { 00:20:03.924 "cntlid": 43, 00:20:03.924 "qid": 0, 00:20:03.924 "state": "enabled", 00:20:03.924 "thread": "nvmf_tgt_poll_group_000", 00:20:03.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:03.924 "listen_address": { 00:20:03.924 "trtype": "TCP", 00:20:03.924 "adrfam": "IPv4", 00:20:03.924 "traddr": "10.0.0.3", 00:20:03.924 "trsvcid": "4420" 00:20:03.924 }, 00:20:03.924 "peer_address": { 00:20:03.924 "trtype": "TCP", 00:20:03.924 "adrfam": "IPv4", 00:20:03.924 "traddr": "10.0.0.1", 00:20:03.924 "trsvcid": "33836" 00:20:03.924 }, 00:20:03.924 "auth": { 00:20:03.924 "state": "completed", 00:20:03.924 "digest": "sha256", 00:20:03.924 "dhgroup": "ffdhe8192" 00:20:03.924 } 00:20:03.924 } 00:20:03.924 ]' 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.924 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.924 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.924 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.924 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.924 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.924 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.183 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:04.183 19:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
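The cycle that follows (sha256 / ffdhe8192 / key2) repeats the same three-step setup every cycle in this trace uses: restrict the host's allowed digests and DH groups, register the host NQN on the target together with the DH-HMAC-CHAP key pair, then perform the authenticated attach. Reflowed here from the one-line RPC calls in the trace; key2/ckey2 are keyring names registered earlier in the run, and showing the target-side call against the default RPC socket is an assumption.

  # Host side: allow only sha256 + ffdhe8192 for DH-HMAC-CHAP negotiation
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target side: admit the host NQN and bind it to key2 (host key) and ckey2 (controller key)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: authenticated attach; this fails if the keys or negotiated parameters do not match
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2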
00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.119 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.378 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.944 00:20:05.944 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.944 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.944 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.202 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.202 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.202 19:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.202 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.460 { 00:20:06.460 "cntlid": 45, 00:20:06.460 "qid": 0, 00:20:06.460 "state": "enabled", 00:20:06.460 "thread": "nvmf_tgt_poll_group_000", 00:20:06.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:06.460 "listen_address": { 00:20:06.460 "trtype": "TCP", 00:20:06.460 "adrfam": "IPv4", 00:20:06.460 "traddr": "10.0.0.3", 00:20:06.460 "trsvcid": "4420" 00:20:06.460 }, 00:20:06.460 "peer_address": { 00:20:06.460 "trtype": "TCP", 00:20:06.460 "adrfam": "IPv4", 00:20:06.460 "traddr": "10.0.0.1", 00:20:06.460 "trsvcid": "33868" 00:20:06.460 }, 00:20:06.460 "auth": { 00:20:06.460 "state": "completed", 00:20:06.460 "digest": "sha256", 00:20:06.460 "dhgroup": "ffdhe8192" 00:20:06.460 } 00:20:06.460 } 00:20:06.460 ]' 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.460 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.718 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:06.718 19:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
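After the bdev-level attach is verified and torn down, the same key pair is exercised through the Linux kernel initiator. The nvme-cli call from the trace is reflowed below for readability; the DHHC-1:<id>:<base64>: strings are the formatted DH-HMAC-CHAP secrets copied verbatim from this run, and supplying both the host secret and the controller secret makes the authentication bidirectional. The -i 1 / -l 0 options mirror the trace (a single I/O queue and a zero controller-loss timeout).

  # Kernel-initiator connect to the same subsystem, authenticating with the key2/ckey2 material
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 \
      -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b \
      --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b \
      -i 1 -l 0 \
      --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: \
      --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X:

  # Tear down again before the next digest/dhgroup/key combination
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0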
00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.653 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.912 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.861 00:20:08.861 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.861 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.861 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.120 
19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.120 { 00:20:09.120 "cntlid": 47, 00:20:09.120 "qid": 0, 00:20:09.120 "state": "enabled", 00:20:09.120 "thread": "nvmf_tgt_poll_group_000", 00:20:09.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:09.120 "listen_address": { 00:20:09.120 "trtype": "TCP", 00:20:09.120 "adrfam": "IPv4", 00:20:09.120 "traddr": "10.0.0.3", 00:20:09.120 "trsvcid": "4420" 00:20:09.120 }, 00:20:09.120 "peer_address": { 00:20:09.120 "trtype": "TCP", 00:20:09.120 "adrfam": "IPv4", 00:20:09.120 "traddr": "10.0.0.1", 00:20:09.120 "trsvcid": "40914" 00:20:09.120 }, 00:20:09.120 "auth": { 00:20:09.120 "state": "completed", 00:20:09.120 "digest": "sha256", 00:20:09.120 "dhgroup": "ffdhe8192" 00:20:09.120 } 00:20:09.120 } 00:20:09.120 ]' 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.120 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.378 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:09.378 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
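The whole trace is produced by the nested driver loop in target/auth.sh visible in the for-lines above: an outer loop over digests, a middle loop over DH groups, and an inner loop over the key IDs. A condensed sketch of that structure follows, limited to the digests, groups and key IDs that actually appear in this excerpt (the full matrix in the script is larger); hostrpc and connect_authenticate are the script's own helpers that the trace is stepping through. The cycles that come next switch to sha384 with the null DH group, i.e. the challenge/response runs without the additional FFDHE exchange.

  # Shape of the driver loop behind this trace (values restricted to what this excerpt shows)
  digests=(sha256 sha384)
  dhgroups=(ffdhe6144 ffdhe8192 null)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do        # keys[] holds the DHHC-1 secrets, indices 0-3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done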
00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.313 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.572 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.831 00:20:10.831 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.831 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.831 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.090 { 00:20:11.090 "cntlid": 49, 00:20:11.090 "qid": 0, 00:20:11.090 "state": "enabled", 00:20:11.090 "thread": "nvmf_tgt_poll_group_000", 00:20:11.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:11.090 "listen_address": { 00:20:11.090 "trtype": "TCP", 00:20:11.090 "adrfam": "IPv4", 00:20:11.090 "traddr": "10.0.0.3", 00:20:11.090 "trsvcid": "4420" 00:20:11.090 }, 00:20:11.090 "peer_address": { 00:20:11.090 "trtype": "TCP", 00:20:11.090 "adrfam": "IPv4", 00:20:11.090 "traddr": "10.0.0.1", 00:20:11.090 "trsvcid": "40954" 00:20:11.090 }, 00:20:11.090 "auth": { 00:20:11.090 "state": "completed", 00:20:11.090 "digest": "sha384", 00:20:11.090 "dhgroup": "null" 00:20:11.090 } 00:20:11.090 } 00:20:11.090 ]' 00:20:11.090 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.349 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.608 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:11.608 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.541 19:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.541 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.106 00:20:13.106 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.106 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.107 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.365 { 00:20:13.365 "cntlid": 51, 00:20:13.365 "qid": 0, 00:20:13.365 "state": "enabled", 00:20:13.365 "thread": "nvmf_tgt_poll_group_000", 00:20:13.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:13.365 "listen_address": { 00:20:13.365 "trtype": "TCP", 00:20:13.365 "adrfam": "IPv4", 00:20:13.365 "traddr": "10.0.0.3", 00:20:13.365 "trsvcid": "4420" 00:20:13.365 }, 00:20:13.365 "peer_address": { 00:20:13.365 "trtype": "TCP", 00:20:13.365 "adrfam": "IPv4", 00:20:13.365 "traddr": "10.0.0.1", 00:20:13.365 "trsvcid": "40972" 00:20:13.365 }, 00:20:13.365 "auth": { 00:20:13.365 "state": "completed", 00:20:13.365 "digest": "sha384", 00:20:13.365 "dhgroup": "null" 00:20:13.365 } 00:20:13.365 } 00:20:13.365 ]' 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.365 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.624 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.624 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.624 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.882 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:13.882 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:14.817 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.818 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.818 19:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.076 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.077 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.337 00:20:15.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:15.337 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.905 { 00:20:15.905 "cntlid": 53, 00:20:15.905 "qid": 0, 00:20:15.905 "state": "enabled", 00:20:15.905 "thread": "nvmf_tgt_poll_group_000", 00:20:15.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:15.905 "listen_address": { 00:20:15.905 "trtype": "TCP", 00:20:15.905 "adrfam": "IPv4", 00:20:15.905 "traddr": "10.0.0.3", 00:20:15.905 "trsvcid": "4420" 00:20:15.905 }, 00:20:15.905 "peer_address": { 00:20:15.905 "trtype": "TCP", 00:20:15.905 "adrfam": "IPv4", 00:20:15.905 "traddr": "10.0.0.1", 00:20:15.905 "trsvcid": "41000" 00:20:15.905 }, 00:20:15.905 "auth": { 00:20:15.905 "state": "completed", 00:20:15.905 "digest": "sha384", 00:20:15.905 "dhgroup": "null" 00:20:15.905 } 00:20:15.905 } 00:20:15.905 ]' 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.905 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.905 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.905 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.905 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.163 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:16.163 19:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.099 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.668 00:20:17.668 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.668 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.668 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.926 { 00:20:17.926 "cntlid": 55, 00:20:17.926 "qid": 0, 00:20:17.926 "state": "enabled", 00:20:17.926 "thread": "nvmf_tgt_poll_group_000", 00:20:17.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:17.926 "listen_address": { 00:20:17.926 "trtype": "TCP", 00:20:17.926 "adrfam": "IPv4", 00:20:17.926 "traddr": "10.0.0.3", 00:20:17.926 "trsvcid": "4420" 00:20:17.926 }, 00:20:17.926 "peer_address": { 00:20:17.926 "trtype": "TCP", 00:20:17.926 "adrfam": "IPv4", 00:20:17.926 "traddr": "10.0.0.1", 00:20:17.926 "trsvcid": "41032" 00:20:17.926 }, 00:20:17.926 "auth": { 00:20:17.926 "state": "completed", 00:20:17.926 "digest": "sha384", 00:20:17.926 "dhgroup": "null" 00:20:17.926 } 00:20:17.926 } 00:20:17.926 ]' 00:20:17.926 19:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.926 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.182 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:18.183 19:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.118 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.377 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.635 00:20:19.635 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.635 
19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.635 19:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.893 { 00:20:19.893 "cntlid": 57, 00:20:19.893 "qid": 0, 00:20:19.893 "state": "enabled", 00:20:19.893 "thread": "nvmf_tgt_poll_group_000", 00:20:19.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:19.893 "listen_address": { 00:20:19.893 "trtype": "TCP", 00:20:19.893 "adrfam": "IPv4", 00:20:19.893 "traddr": "10.0.0.3", 00:20:19.893 "trsvcid": "4420" 00:20:19.893 }, 00:20:19.893 "peer_address": { 00:20:19.893 "trtype": "TCP", 00:20:19.893 "adrfam": "IPv4", 00:20:19.893 "traddr": "10.0.0.1", 00:20:19.893 "trsvcid": "60788" 00:20:19.893 }, 00:20:19.893 "auth": { 00:20:19.893 "state": "completed", 00:20:19.893 "digest": "sha384", 00:20:19.893 "dhgroup": "ffdhe2048" 00:20:19.893 } 00:20:19.893 } 00:20:19.893 ]' 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.893 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.152 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.152 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.152 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.152 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.152 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.410 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:20.410 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: 
--dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.347 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.606 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.606 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.606 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.606 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.864 00:20:21.864 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.864 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.864 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.122 { 00:20:22.122 "cntlid": 59, 00:20:22.122 "qid": 0, 00:20:22.122 "state": "enabled", 00:20:22.122 "thread": "nvmf_tgt_poll_group_000", 00:20:22.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:22.122 "listen_address": { 00:20:22.122 "trtype": "TCP", 00:20:22.122 "adrfam": "IPv4", 00:20:22.122 "traddr": "10.0.0.3", 00:20:22.122 "trsvcid": "4420" 00:20:22.122 }, 00:20:22.122 "peer_address": { 00:20:22.122 "trtype": "TCP", 00:20:22.122 "adrfam": "IPv4", 00:20:22.122 "traddr": "10.0.0.1", 00:20:22.122 "trsvcid": "60816" 00:20:22.122 }, 00:20:22.122 "auth": { 00:20:22.122 "state": "completed", 00:20:22.122 "digest": "sha384", 00:20:22.122 "dhgroup": "ffdhe2048" 00:20:22.122 } 00:20:22.122 } 00:20:22.122 ]' 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.122 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.380 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.380 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.380 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.380 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.381 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.639 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:22.639 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.222 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.789 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.047 00:20:24.047 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.047 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.047 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.306 { 00:20:24.306 "cntlid": 61, 00:20:24.306 "qid": 0, 00:20:24.306 "state": "enabled", 00:20:24.306 "thread": "nvmf_tgt_poll_group_000", 00:20:24.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:24.306 "listen_address": { 00:20:24.306 "trtype": "TCP", 00:20:24.306 "adrfam": "IPv4", 00:20:24.306 "traddr": "10.0.0.3", 00:20:24.306 "trsvcid": "4420" 00:20:24.306 }, 00:20:24.306 "peer_address": { 00:20:24.306 "trtype": "TCP", 00:20:24.306 "adrfam": "IPv4", 00:20:24.306 "traddr": "10.0.0.1", 00:20:24.306 "trsvcid": "60844" 00:20:24.306 }, 00:20:24.306 "auth": { 00:20:24.306 "state": "completed", 00:20:24.306 "digest": "sha384", 00:20:24.306 "dhgroup": "ffdhe2048" 00:20:24.306 } 00:20:24.306 } 00:20:24.306 ]' 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.306 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.564 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.564 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.564 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.564 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.564 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.823 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:24.823 19:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.759 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.016 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.017 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.274 00:20:26.274 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.274 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.274 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.533 { 00:20:26.533 "cntlid": 63, 00:20:26.533 "qid": 0, 00:20:26.533 "state": "enabled", 00:20:26.533 "thread": "nvmf_tgt_poll_group_000", 00:20:26.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:26.533 "listen_address": { 00:20:26.533 "trtype": "TCP", 00:20:26.533 "adrfam": "IPv4", 00:20:26.533 "traddr": "10.0.0.3", 00:20:26.533 "trsvcid": "4420" 00:20:26.533 }, 00:20:26.533 "peer_address": { 00:20:26.533 "trtype": "TCP", 00:20:26.533 "adrfam": "IPv4", 00:20:26.533 "traddr": "10.0.0.1", 00:20:26.533 "trsvcid": "60876" 00:20:26.533 }, 00:20:26.533 "auth": { 00:20:26.533 "state": "completed", 00:20:26.533 "digest": "sha384", 00:20:26.533 "dhgroup": "ffdhe2048" 00:20:26.533 } 00:20:26.533 } 00:20:26.533 ]' 00:20:26.533 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.792 19:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.050 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:27.050 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:27.984 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.985 19:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.985 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.243 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.243 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:28.243 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.501 00:20:28.501 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.501 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.501 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.759 { 00:20:28.759 "cntlid": 65, 00:20:28.759 "qid": 0, 00:20:28.759 "state": "enabled", 00:20:28.759 "thread": "nvmf_tgt_poll_group_000", 00:20:28.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:28.759 "listen_address": { 00:20:28.759 "trtype": "TCP", 00:20:28.759 "adrfam": "IPv4", 00:20:28.759 "traddr": "10.0.0.3", 00:20:28.759 "trsvcid": "4420" 00:20:28.759 }, 00:20:28.759 "peer_address": { 00:20:28.759 "trtype": "TCP", 00:20:28.759 "adrfam": "IPv4", 00:20:28.759 "traddr": "10.0.0.1", 00:20:28.759 "trsvcid": "49724" 00:20:28.759 }, 00:20:28.759 "auth": { 00:20:28.759 "state": "completed", 00:20:28.759 "digest": "sha384", 00:20:28.759 "dhgroup": "ffdhe3072" 00:20:28.759 } 00:20:28.759 } 00:20:28.759 ]' 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.759 19:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.019 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.019 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.019 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.019 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.019 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.278 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:29.278 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:29.845 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.845 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:29.845 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.845 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.103 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.103 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.103 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.103 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.362 19:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.362 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.620 00:20:30.620 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.620 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.620 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.878 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.136 { 00:20:31.136 "cntlid": 67, 00:20:31.136 "qid": 0, 00:20:31.136 "state": "enabled", 00:20:31.136 "thread": "nvmf_tgt_poll_group_000", 00:20:31.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:31.136 "listen_address": { 00:20:31.136 "trtype": "TCP", 00:20:31.136 "adrfam": "IPv4", 00:20:31.136 "traddr": "10.0.0.3", 00:20:31.136 "trsvcid": "4420" 00:20:31.136 }, 00:20:31.136 "peer_address": { 00:20:31.136 "trtype": "TCP", 00:20:31.136 "adrfam": "IPv4", 00:20:31.136 "traddr": "10.0.0.1", 00:20:31.136 "trsvcid": "49748" 00:20:31.136 }, 00:20:31.136 "auth": { 00:20:31.136 "state": "completed", 00:20:31.136 "digest": "sha384", 00:20:31.136 "dhgroup": "ffdhe3072" 00:20:31.136 } 00:20:31.136 } 00:20:31.136 ]' 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.136 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.137 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.137 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.137 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.394 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:31.394 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.329 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.587 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.845 00:20:32.845 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.845 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.845 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.412 { 00:20:33.412 "cntlid": 69, 00:20:33.412 "qid": 0, 00:20:33.412 "state": "enabled", 00:20:33.412 "thread": "nvmf_tgt_poll_group_000", 00:20:33.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:33.412 "listen_address": { 00:20:33.412 "trtype": "TCP", 00:20:33.412 "adrfam": "IPv4", 00:20:33.412 "traddr": "10.0.0.3", 00:20:33.412 "trsvcid": "4420" 00:20:33.412 }, 00:20:33.412 "peer_address": { 00:20:33.412 "trtype": "TCP", 00:20:33.412 "adrfam": "IPv4", 00:20:33.412 "traddr": "10.0.0.1", 00:20:33.412 "trsvcid": "49772" 00:20:33.412 }, 00:20:33.412 "auth": { 00:20:33.412 "state": "completed", 00:20:33.412 "digest": "sha384", 00:20:33.412 "dhgroup": "ffdhe3072" 00:20:33.412 } 00:20:33.412 } 00:20:33.412 ]' 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:33.412 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.979 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:33.979 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.545 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.118 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.377 00:20:35.377 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.377 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.377 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.634 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.634 { 00:20:35.634 "cntlid": 71, 00:20:35.634 "qid": 0, 00:20:35.634 "state": "enabled", 00:20:35.634 "thread": "nvmf_tgt_poll_group_000", 00:20:35.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:35.634 "listen_address": { 00:20:35.634 "trtype": "TCP", 00:20:35.634 "adrfam": "IPv4", 00:20:35.634 "traddr": "10.0.0.3", 00:20:35.634 "trsvcid": "4420" 00:20:35.634 }, 00:20:35.634 "peer_address": { 00:20:35.634 "trtype": "TCP", 00:20:35.634 "adrfam": "IPv4", 00:20:35.634 "traddr": "10.0.0.1", 00:20:35.634 "trsvcid": "49806" 00:20:35.634 }, 00:20:35.634 "auth": { 00:20:35.635 "state": "completed", 00:20:35.635 "digest": "sha384", 00:20:35.635 "dhgroup": "ffdhe3072" 00:20:35.635 } 00:20:35.635 } 00:20:35.635 ]' 00:20:35.635 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.892 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.892 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.892 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.892 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.892 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.892 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.892 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:36.151 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.133 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.392 19:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.392 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.651 00:20:37.651 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.651 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.651 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.217 { 00:20:38.217 "cntlid": 73, 00:20:38.217 "qid": 0, 00:20:38.217 "state": "enabled", 00:20:38.217 "thread": "nvmf_tgt_poll_group_000", 00:20:38.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:38.217 "listen_address": { 00:20:38.217 "trtype": "TCP", 00:20:38.217 "adrfam": "IPv4", 00:20:38.217 "traddr": "10.0.0.3", 00:20:38.217 "trsvcid": "4420" 00:20:38.217 }, 00:20:38.217 "peer_address": { 00:20:38.217 "trtype": "TCP", 00:20:38.217 "adrfam": "IPv4", 00:20:38.217 "traddr": "10.0.0.1", 00:20:38.217 "trsvcid": "49838" 00:20:38.217 }, 00:20:38.217 "auth": { 00:20:38.217 "state": "completed", 00:20:38.217 "digest": "sha384", 00:20:38.217 "dhgroup": "ffdhe4096" 00:20:38.217 } 00:20:38.217 } 00:20:38.217 ]' 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.217 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.475 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:38.475 19:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:39.408 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.409 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.665 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.666 19:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.666 19:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.230 00:20:40.230 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.230 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.230 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.489 { 00:20:40.489 "cntlid": 75, 00:20:40.489 "qid": 0, 00:20:40.489 "state": "enabled", 00:20:40.489 "thread": "nvmf_tgt_poll_group_000", 00:20:40.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:40.489 "listen_address": { 00:20:40.489 "trtype": "TCP", 00:20:40.489 "adrfam": "IPv4", 00:20:40.489 "traddr": "10.0.0.3", 00:20:40.489 "trsvcid": "4420" 00:20:40.489 }, 00:20:40.489 "peer_address": { 00:20:40.489 "trtype": "TCP", 00:20:40.489 "adrfam": "IPv4", 00:20:40.489 "traddr": "10.0.0.1", 00:20:40.489 "trsvcid": "60452" 00:20:40.489 }, 00:20:40.489 "auth": { 00:20:40.489 "state": "completed", 00:20:40.489 "digest": "sha384", 00:20:40.489 "dhgroup": "ffdhe4096" 00:20:40.489 } 00:20:40.489 } 00:20:40.489 ]' 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.489 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.055 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:41.055 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.621 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.189 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.451 00:20:42.451 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.451 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.451 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.723 { 00:20:42.723 "cntlid": 77, 00:20:42.723 "qid": 0, 00:20:42.723 "state": "enabled", 00:20:42.723 "thread": "nvmf_tgt_poll_group_000", 00:20:42.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:42.723 "listen_address": { 00:20:42.723 "trtype": "TCP", 00:20:42.723 "adrfam": "IPv4", 00:20:42.723 "traddr": "10.0.0.3", 00:20:42.723 "trsvcid": "4420" 00:20:42.723 }, 00:20:42.723 "peer_address": { 00:20:42.723 "trtype": "TCP", 00:20:42.723 "adrfam": "IPv4", 00:20:42.723 "traddr": "10.0.0.1", 00:20:42.723 "trsvcid": "60486" 00:20:42.723 }, 00:20:42.723 "auth": { 00:20:42.723 "state": "completed", 00:20:42.723 "digest": "sha384", 00:20:42.723 "dhgroup": "ffdhe4096" 00:20:42.723 } 00:20:42.723 } 00:20:42.723 ]' 00:20:42.723 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.981 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.981 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:20:42.981 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.981 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.981 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.981 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.981 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.239 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:43.239 19:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.174 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.175 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.175 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.435 19:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.435 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.692 00:20:44.950 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.950 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.950 19:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.256 { 00:20:45.256 "cntlid": 79, 00:20:45.256 "qid": 0, 00:20:45.256 "state": "enabled", 00:20:45.256 "thread": "nvmf_tgt_poll_group_000", 00:20:45.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:45.256 "listen_address": { 00:20:45.256 "trtype": "TCP", 00:20:45.256 "adrfam": "IPv4", 00:20:45.256 "traddr": "10.0.0.3", 00:20:45.256 "trsvcid": "4420" 00:20:45.256 }, 00:20:45.256 "peer_address": { 00:20:45.256 "trtype": "TCP", 00:20:45.256 "adrfam": "IPv4", 00:20:45.256 "traddr": "10.0.0.1", 00:20:45.256 "trsvcid": "60510" 00:20:45.256 }, 00:20:45.256 "auth": { 00:20:45.256 "state": "completed", 00:20:45.256 "digest": "sha384", 00:20:45.256 "dhgroup": "ffdhe4096" 00:20:45.256 } 00:20:45.256 } 00:20:45.256 ]' 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.256 19:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.256 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.512 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.512 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.512 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.770 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:45.770 19:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.703 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.960 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.960 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.960 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.960 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.525 00:20:47.525 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.525 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.525 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.783 { 00:20:47.783 "cntlid": 81, 00:20:47.783 "qid": 0, 00:20:47.783 "state": "enabled", 00:20:47.783 "thread": "nvmf_tgt_poll_group_000", 00:20:47.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:47.783 "listen_address": { 00:20:47.783 "trtype": "TCP", 00:20:47.783 "adrfam": "IPv4", 00:20:47.783 "traddr": "10.0.0.3", 00:20:47.783 "trsvcid": "4420" 00:20:47.783 }, 00:20:47.783 "peer_address": { 00:20:47.783 "trtype": "TCP", 00:20:47.783 "adrfam": "IPv4", 00:20:47.783 "traddr": "10.0.0.1", 00:20:47.783 "trsvcid": "60548" 00:20:47.783 }, 00:20:47.783 "auth": { 00:20:47.783 "state": "completed", 00:20:47.783 "digest": "sha384", 00:20:47.783 "dhgroup": "ffdhe6144" 00:20:47.783 } 00:20:47.783 } 00:20:47.783 ]' 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.783 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.783 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.783 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.783 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.348 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:48.349 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:48.915 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.915 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.916 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.481 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.046 00:20:50.046 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.047 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.047 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.305 { 00:20:50.305 "cntlid": 83, 00:20:50.305 "qid": 0, 00:20:50.305 "state": "enabled", 00:20:50.305 "thread": "nvmf_tgt_poll_group_000", 00:20:50.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:50.305 "listen_address": { 00:20:50.305 "trtype": "TCP", 00:20:50.305 "adrfam": "IPv4", 00:20:50.305 "traddr": "10.0.0.3", 00:20:50.305 "trsvcid": "4420" 00:20:50.305 }, 00:20:50.305 "peer_address": { 00:20:50.305 "trtype": "TCP", 00:20:50.305 "adrfam": "IPv4", 00:20:50.305 "traddr": "10.0.0.1", 00:20:50.305 "trsvcid": "36320" 00:20:50.305 }, 00:20:50.305 "auth": { 00:20:50.305 "state": "completed", 00:20:50.305 "digest": "sha384", 
00:20:50.305 "dhgroup": "ffdhe6144" 00:20:50.305 } 00:20:50.305 } 00:20:50.305 ]' 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.305 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.873 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:50.873 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:20:51.439 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.440 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:51.440 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.440 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.697 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.697 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.697 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:51.697 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.955 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.955 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.955 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.955 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.955 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.521 00:20:52.521 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.521 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.521 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.780 { 00:20:52.780 "cntlid": 85, 00:20:52.780 "qid": 0, 00:20:52.780 "state": "enabled", 00:20:52.780 "thread": "nvmf_tgt_poll_group_000", 00:20:52.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:52.780 "listen_address": { 00:20:52.780 "trtype": "TCP", 00:20:52.780 "adrfam": "IPv4", 00:20:52.780 "traddr": "10.0.0.3", 00:20:52.780 "trsvcid": "4420" 00:20:52.780 }, 00:20:52.780 "peer_address": { 00:20:52.780 "trtype": "TCP", 00:20:52.780 "adrfam": "IPv4", 00:20:52.780 "traddr": "10.0.0.1", 00:20:52.780 "trsvcid": "36358" 
00:20:52.780 }, 00:20:52.780 "auth": { 00:20:52.780 "state": "completed", 00:20:52.780 "digest": "sha384", 00:20:52.780 "dhgroup": "ffdhe6144" 00:20:52.780 } 00:20:52.780 } 00:20:52.780 ]' 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.780 19:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.780 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.780 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.038 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.038 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.038 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.296 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:53.296 19:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.862 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.497 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.756 00:20:54.756 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.756 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.756 19:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.015 { 00:20:55.015 "cntlid": 87, 00:20:55.015 "qid": 0, 00:20:55.015 "state": "enabled", 00:20:55.015 "thread": "nvmf_tgt_poll_group_000", 00:20:55.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:55.015 "listen_address": { 00:20:55.015 "trtype": "TCP", 00:20:55.015 "adrfam": "IPv4", 00:20:55.015 "traddr": "10.0.0.3", 00:20:55.015 "trsvcid": "4420" 00:20:55.015 }, 00:20:55.015 "peer_address": { 00:20:55.015 "trtype": "TCP", 00:20:55.015 "adrfam": "IPv4", 00:20:55.015 "traddr": "10.0.0.1", 00:20:55.015 "trsvcid": 
"36394" 00:20:55.015 }, 00:20:55.015 "auth": { 00:20:55.015 "state": "completed", 00:20:55.015 "digest": "sha384", 00:20:55.015 "dhgroup": "ffdhe6144" 00:20:55.015 } 00:20:55.015 } 00:20:55.015 ]' 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.015 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.274 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.274 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.274 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.274 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.274 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.533 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:55.533 19:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.469 19:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.429 00:20:57.429 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.429 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.429 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.687 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.687 { 00:20:57.687 "cntlid": 89, 00:20:57.687 "qid": 0, 00:20:57.687 "state": "enabled", 00:20:57.687 "thread": "nvmf_tgt_poll_group_000", 00:20:57.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:20:57.687 "listen_address": { 00:20:57.687 "trtype": "TCP", 00:20:57.687 "adrfam": "IPv4", 00:20:57.687 "traddr": "10.0.0.3", 00:20:57.687 "trsvcid": "4420" 00:20:57.687 }, 00:20:57.687 "peer_address": { 00:20:57.687 
"trtype": "TCP", 00:20:57.687 "adrfam": "IPv4", 00:20:57.687 "traddr": "10.0.0.1", 00:20:57.687 "trsvcid": "36414" 00:20:57.687 }, 00:20:57.688 "auth": { 00:20:57.688 "state": "completed", 00:20:57.688 "digest": "sha384", 00:20:57.688 "dhgroup": "ffdhe8192" 00:20:57.688 } 00:20:57.688 } 00:20:57.688 ]' 00:20:57.688 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.688 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.688 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.688 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.688 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.946 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.946 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.946 19:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.203 19:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:58.203 19:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:20:58.770 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.029 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.304 19:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.304 19:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.894 00:20:59.894 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.894 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.894 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.153 { 00:21:00.153 "cntlid": 91, 00:21:00.153 "qid": 0, 00:21:00.153 "state": "enabled", 00:21:00.153 "thread": "nvmf_tgt_poll_group_000", 00:21:00.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 
00:21:00.153 "listen_address": { 00:21:00.153 "trtype": "TCP", 00:21:00.153 "adrfam": "IPv4", 00:21:00.153 "traddr": "10.0.0.3", 00:21:00.153 "trsvcid": "4420" 00:21:00.153 }, 00:21:00.153 "peer_address": { 00:21:00.153 "trtype": "TCP", 00:21:00.153 "adrfam": "IPv4", 00:21:00.153 "traddr": "10.0.0.1", 00:21:00.153 "trsvcid": "60360" 00:21:00.153 }, 00:21:00.153 "auth": { 00:21:00.153 "state": "completed", 00:21:00.153 "digest": "sha384", 00:21:00.153 "dhgroup": "ffdhe8192" 00:21:00.153 } 00:21:00.153 } 00:21:00.153 ]' 00:21:00.153 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.411 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.669 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:00.669 19:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.543 00:21:02.543 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.543 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.543 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.802 { 00:21:02.802 "cntlid": 93, 00:21:02.802 "qid": 0, 00:21:02.802 "state": "enabled", 00:21:02.802 "thread": 
"nvmf_tgt_poll_group_000", 00:21:02.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:02.802 "listen_address": { 00:21:02.802 "trtype": "TCP", 00:21:02.802 "adrfam": "IPv4", 00:21:02.802 "traddr": "10.0.0.3", 00:21:02.802 "trsvcid": "4420" 00:21:02.802 }, 00:21:02.802 "peer_address": { 00:21:02.802 "trtype": "TCP", 00:21:02.802 "adrfam": "IPv4", 00:21:02.802 "traddr": "10.0.0.1", 00:21:02.802 "trsvcid": "60382" 00:21:02.802 }, 00:21:02.802 "auth": { 00:21:02.802 "state": "completed", 00:21:02.802 "digest": "sha384", 00:21:02.802 "dhgroup": "ffdhe8192" 00:21:02.802 } 00:21:02.802 } 00:21:02.802 ]' 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.802 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.060 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:03.060 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.993 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.993 19:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.252 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.186 00:21:05.186 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.186 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.186 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.444 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.445 { 00:21:05.445 "cntlid": 95, 00:21:05.445 "qid": 0, 00:21:05.445 "state": "enabled", 00:21:05.445 
"thread": "nvmf_tgt_poll_group_000", 00:21:05.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:05.445 "listen_address": { 00:21:05.445 "trtype": "TCP", 00:21:05.445 "adrfam": "IPv4", 00:21:05.445 "traddr": "10.0.0.3", 00:21:05.445 "trsvcid": "4420" 00:21:05.445 }, 00:21:05.445 "peer_address": { 00:21:05.445 "trtype": "TCP", 00:21:05.445 "adrfam": "IPv4", 00:21:05.445 "traddr": "10.0.0.1", 00:21:05.445 "trsvcid": "60402" 00:21:05.445 }, 00:21:05.445 "auth": { 00:21:05.445 "state": "completed", 00:21:05.445 "digest": "sha384", 00:21:05.445 "dhgroup": "ffdhe8192" 00:21:05.445 } 00:21:05.445 } 00:21:05.445 ]' 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.445 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.011 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:06.011 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.960 19:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.960 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.218 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:07.218 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.218 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.218 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:07.218 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.219 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.477 00:21:07.477 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.477 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.477 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.735 { 00:21:07.735 "cntlid": 97, 00:21:07.735 "qid": 0, 00:21:07.735 "state": "enabled", 00:21:07.735 "thread": "nvmf_tgt_poll_group_000", 00:21:07.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:07.735 "listen_address": { 00:21:07.735 "trtype": "TCP", 00:21:07.735 "adrfam": "IPv4", 00:21:07.735 "traddr": "10.0.0.3", 00:21:07.735 "trsvcid": "4420" 00:21:07.735 }, 00:21:07.735 "peer_address": { 00:21:07.735 "trtype": "TCP", 00:21:07.735 "adrfam": "IPv4", 00:21:07.735 "traddr": "10.0.0.1", 00:21:07.735 "trsvcid": "60436" 00:21:07.735 }, 00:21:07.735 "auth": { 00:21:07.735 "state": "completed", 00:21:07.735 "digest": "sha512", 00:21:07.735 "dhgroup": "null" 00:21:07.735 } 00:21:07.735 } 00:21:07.735 ]' 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.735 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.993 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.993 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.993 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.993 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.993 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.252 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:08.252 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:08.818 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.818 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:08.818 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.818 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.078 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:09.078 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.078 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.078 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.338 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.596 00:21:09.596 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.596 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.596 19:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.853 19:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.853 { 00:21:09.853 "cntlid": 99, 00:21:09.853 "qid": 0, 00:21:09.853 "state": "enabled", 00:21:09.853 "thread": "nvmf_tgt_poll_group_000", 00:21:09.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:09.853 "listen_address": { 00:21:09.853 "trtype": "TCP", 00:21:09.853 "adrfam": "IPv4", 00:21:09.853 "traddr": "10.0.0.3", 00:21:09.853 "trsvcid": "4420" 00:21:09.853 }, 00:21:09.853 "peer_address": { 00:21:09.853 "trtype": "TCP", 00:21:09.853 "adrfam": "IPv4", 00:21:09.853 "traddr": "10.0.0.1", 00:21:09.853 "trsvcid": "38572" 00:21:09.853 }, 00:21:09.853 "auth": { 00:21:09.853 "state": "completed", 00:21:09.853 "digest": "sha512", 00:21:09.853 "dhgroup": "null" 00:21:09.853 } 00:21:09.853 } 00:21:09.853 ]' 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.853 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.112 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.112 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.112 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.112 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.112 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.370 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:10.370 19:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.937 19:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.937 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.195 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.453 00:21:11.453 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.710 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.711 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.969 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.969 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.969 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 19:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.969 { 00:21:11.969 "cntlid": 101, 00:21:11.969 "qid": 0, 00:21:11.969 "state": "enabled", 00:21:11.969 "thread": "nvmf_tgt_poll_group_000", 00:21:11.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:11.969 "listen_address": { 00:21:11.969 "trtype": "TCP", 00:21:11.969 "adrfam": "IPv4", 00:21:11.969 "traddr": "10.0.0.3", 00:21:11.969 "trsvcid": "4420" 00:21:11.969 }, 00:21:11.969 "peer_address": { 00:21:11.969 "trtype": "TCP", 00:21:11.969 "adrfam": "IPv4", 00:21:11.969 "traddr": "10.0.0.1", 00:21:11.969 "trsvcid": "38612" 00:21:11.969 }, 00:21:11.969 "auth": { 00:21:11.969 "state": "completed", 00:21:11.969 "digest": "sha512", 00:21:11.969 "dhgroup": "null" 00:21:11.969 } 00:21:11.969 } 00:21:11.969 ]' 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.969 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.228 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:12.228 19:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.163 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.422 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.680 00:21:13.680 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.680 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.680 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.939 { 00:21:13.939 "cntlid": 103, 00:21:13.939 "qid": 0, 00:21:13.939 "state": "enabled", 00:21:13.939 "thread": "nvmf_tgt_poll_group_000", 00:21:13.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:13.939 "listen_address": { 00:21:13.939 "trtype": "TCP", 00:21:13.939 "adrfam": "IPv4", 00:21:13.939 "traddr": "10.0.0.3", 00:21:13.939 "trsvcid": "4420" 00:21:13.939 }, 00:21:13.939 "peer_address": { 00:21:13.939 "trtype": "TCP", 00:21:13.939 "adrfam": "IPv4", 00:21:13.939 "traddr": "10.0.0.1", 00:21:13.939 "trsvcid": "38632" 00:21:13.939 }, 00:21:13.939 "auth": { 00:21:13.939 "state": "completed", 00:21:13.939 "digest": "sha512", 00:21:13.939 "dhgroup": "null" 00:21:13.939 } 00:21:13.939 } 00:21:13.939 ]' 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.939 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.197 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:14.197 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.197 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.197 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.197 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.455 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:14.455 19:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.021 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.280 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.855 00:21:15.855 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.855 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.855 19:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.114 
19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.114 { 00:21:16.114 "cntlid": 105, 00:21:16.114 "qid": 0, 00:21:16.114 "state": "enabled", 00:21:16.114 "thread": "nvmf_tgt_poll_group_000", 00:21:16.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:16.114 "listen_address": { 00:21:16.114 "trtype": "TCP", 00:21:16.114 "adrfam": "IPv4", 00:21:16.114 "traddr": "10.0.0.3", 00:21:16.114 "trsvcid": "4420" 00:21:16.114 }, 00:21:16.114 "peer_address": { 00:21:16.114 "trtype": "TCP", 00:21:16.114 "adrfam": "IPv4", 00:21:16.114 "traddr": "10.0.0.1", 00:21:16.114 "trsvcid": "38660" 00:21:16.114 }, 00:21:16.114 "auth": { 00:21:16.114 "state": "completed", 00:21:16.114 "digest": "sha512", 00:21:16.114 "dhgroup": "ffdhe2048" 00:21:16.114 } 00:21:16.114 } 00:21:16.114 ]' 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.114 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.373 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:16.373 19:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:17.307 19:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.307 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.569 19:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.828 00:21:17.828 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.828 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.828 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.086 { 00:21:18.086 "cntlid": 107, 00:21:18.086 "qid": 0, 00:21:18.086 "state": "enabled", 00:21:18.086 "thread": "nvmf_tgt_poll_group_000", 00:21:18.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:18.086 "listen_address": { 00:21:18.086 "trtype": "TCP", 00:21:18.086 "adrfam": "IPv4", 00:21:18.086 "traddr": "10.0.0.3", 00:21:18.086 "trsvcid": "4420" 00:21:18.086 }, 00:21:18.086 "peer_address": { 00:21:18.086 "trtype": "TCP", 00:21:18.086 "adrfam": "IPv4", 00:21:18.086 "traddr": "10.0.0.1", 00:21:18.086 "trsvcid": "49864" 00:21:18.086 }, 00:21:18.086 "auth": { 00:21:18.086 "state": "completed", 00:21:18.086 "digest": "sha512", 00:21:18.086 "dhgroup": "ffdhe2048" 00:21:18.086 } 00:21:18.086 } 00:21:18.086 ]' 00:21:18.086 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.345 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.604 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:18.604 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.188 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.446 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.704 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.704 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.704 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.704 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.964 00:21:19.964 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.964 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.964 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.223 { 00:21:20.223 "cntlid": 109, 00:21:20.223 "qid": 0, 00:21:20.223 "state": "enabled", 00:21:20.223 "thread": "nvmf_tgt_poll_group_000", 00:21:20.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:20.223 "listen_address": { 00:21:20.223 "trtype": "TCP", 00:21:20.223 "adrfam": "IPv4", 00:21:20.223 "traddr": "10.0.0.3", 00:21:20.223 "trsvcid": "4420" 00:21:20.223 }, 00:21:20.223 "peer_address": { 00:21:20.223 "trtype": "TCP", 00:21:20.223 "adrfam": "IPv4", 00:21:20.223 "traddr": "10.0.0.1", 00:21:20.223 "trsvcid": "49894" 00:21:20.223 }, 00:21:20.223 "auth": { 00:21:20.223 "state": "completed", 00:21:20.223 "digest": "sha512", 00:21:20.223 "dhgroup": "ffdhe2048" 00:21:20.223 } 00:21:20.223 } 00:21:20.223 ]' 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.223 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.482 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.482 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.482 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.482 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.482 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.741 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:20.741 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.135 00:21:22.135 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.135 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.135 19:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.394 { 00:21:22.394 "cntlid": 111, 00:21:22.394 "qid": 0, 00:21:22.394 "state": "enabled", 00:21:22.394 "thread": "nvmf_tgt_poll_group_000", 00:21:22.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:22.394 "listen_address": { 00:21:22.394 "trtype": "TCP", 00:21:22.394 "adrfam": "IPv4", 00:21:22.394 "traddr": "10.0.0.3", 00:21:22.394 "trsvcid": "4420" 00:21:22.394 }, 00:21:22.394 "peer_address": { 00:21:22.394 "trtype": "TCP", 00:21:22.394 "adrfam": "IPv4", 00:21:22.394 "traddr": "10.0.0.1", 00:21:22.394 "trsvcid": "49932" 00:21:22.394 }, 00:21:22.394 "auth": { 00:21:22.394 "state": "completed", 00:21:22.394 "digest": "sha512", 00:21:22.394 "dhgroup": "ffdhe2048" 00:21:22.394 } 00:21:22.394 } 00:21:22.394 ]' 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.394 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.652 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.653 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.653 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.911 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:22.911 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.478 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.044 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.302 00:21:24.302 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.302 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:24.302 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.560 { 00:21:24.560 "cntlid": 113, 00:21:24.560 "qid": 0, 00:21:24.560 "state": "enabled", 00:21:24.560 "thread": "nvmf_tgt_poll_group_000", 00:21:24.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:24.560 "listen_address": { 00:21:24.560 "trtype": "TCP", 00:21:24.560 "adrfam": "IPv4", 00:21:24.560 "traddr": "10.0.0.3", 00:21:24.560 "trsvcid": "4420" 00:21:24.560 }, 00:21:24.560 "peer_address": { 00:21:24.560 "trtype": "TCP", 00:21:24.560 "adrfam": "IPv4", 00:21:24.560 "traddr": "10.0.0.1", 00:21:24.560 "trsvcid": "49976" 00:21:24.560 }, 00:21:24.560 "auth": { 00:21:24.560 "state": "completed", 00:21:24.560 "digest": "sha512", 00:21:24.560 "dhgroup": "ffdhe3072" 00:21:24.560 } 00:21:24.560 } 00:21:24.560 ]' 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.560 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.819 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.819 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.819 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.078 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:25.078 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret 
DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.015 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.273 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.531 00:21:26.531 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.531 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.531 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.789 { 00:21:26.789 "cntlid": 115, 00:21:26.789 "qid": 0, 00:21:26.789 "state": "enabled", 00:21:26.789 "thread": "nvmf_tgt_poll_group_000", 00:21:26.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:26.789 "listen_address": { 00:21:26.789 "trtype": "TCP", 00:21:26.789 "adrfam": "IPv4", 00:21:26.789 "traddr": "10.0.0.3", 00:21:26.789 "trsvcid": "4420" 00:21:26.789 }, 00:21:26.789 "peer_address": { 00:21:26.789 "trtype": "TCP", 00:21:26.789 "adrfam": "IPv4", 00:21:26.789 "traddr": "10.0.0.1", 00:21:26.789 "trsvcid": "50018" 00:21:26.789 }, 00:21:26.789 "auth": { 00:21:26.789 "state": "completed", 00:21:26.789 "digest": "sha512", 00:21:26.789 "dhgroup": "ffdhe3072" 00:21:26.789 } 00:21:26.789 } 00:21:26.789 ]' 00:21:26.789 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.789 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.789 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.047 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.047 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.047 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.047 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.047 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.306 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:27.306 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid 
cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.241 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.807 00:21:28.807 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.807 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.807 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.064 { 00:21:29.064 "cntlid": 117, 00:21:29.064 "qid": 0, 00:21:29.064 "state": "enabled", 00:21:29.064 "thread": "nvmf_tgt_poll_group_000", 00:21:29.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:29.064 "listen_address": { 00:21:29.064 "trtype": "TCP", 00:21:29.064 "adrfam": "IPv4", 00:21:29.064 "traddr": "10.0.0.3", 00:21:29.064 "trsvcid": "4420" 00:21:29.064 }, 00:21:29.064 "peer_address": { 00:21:29.064 "trtype": "TCP", 00:21:29.064 "adrfam": "IPv4", 00:21:29.064 "traddr": "10.0.0.1", 00:21:29.064 "trsvcid": "34456" 00:21:29.064 }, 00:21:29.064 "auth": { 00:21:29.064 "state": "completed", 00:21:29.064 "digest": "sha512", 00:21:29.064 "dhgroup": "ffdhe3072" 00:21:29.064 } 00:21:29.064 } 00:21:29.064 ]' 00:21:29.064 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.065 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.065 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.323 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.323 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.323 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.323 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.323 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.581 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:29.581 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.516 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.774 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.032 00:21:31.032 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.032 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.032 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.599 { 00:21:31.599 "cntlid": 119, 00:21:31.599 "qid": 0, 00:21:31.599 "state": "enabled", 00:21:31.599 "thread": "nvmf_tgt_poll_group_000", 00:21:31.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:31.599 "listen_address": { 00:21:31.599 "trtype": "TCP", 00:21:31.599 "adrfam": "IPv4", 00:21:31.599 "traddr": "10.0.0.3", 00:21:31.599 "trsvcid": "4420" 00:21:31.599 }, 00:21:31.599 "peer_address": { 00:21:31.599 "trtype": "TCP", 00:21:31.599 "adrfam": "IPv4", 00:21:31.599 "traddr": "10.0.0.1", 00:21:31.599 "trsvcid": "34484" 00:21:31.599 }, 00:21:31.599 "auth": { 00:21:31.599 "state": "completed", 00:21:31.599 "digest": "sha512", 00:21:31.599 "dhgroup": "ffdhe3072" 00:21:31.599 } 00:21:31.599 } 00:21:31.599 ]' 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.599 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.858 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:31.858 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.791 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.048 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.049 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.613 00:21:33.613 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.613 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.613 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.870 { 00:21:33.870 "cntlid": 121, 00:21:33.870 "qid": 0, 00:21:33.870 "state": "enabled", 00:21:33.870 "thread": "nvmf_tgt_poll_group_000", 00:21:33.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:33.870 "listen_address": { 00:21:33.870 "trtype": "TCP", 00:21:33.870 "adrfam": "IPv4", 00:21:33.870 "traddr": "10.0.0.3", 00:21:33.870 "trsvcid": "4420" 00:21:33.870 }, 00:21:33.870 "peer_address": { 00:21:33.870 "trtype": "TCP", 00:21:33.870 "adrfam": "IPv4", 00:21:33.870 "traddr": "10.0.0.1", 00:21:33.870 "trsvcid": "34518" 00:21:33.870 }, 00:21:33.870 "auth": { 00:21:33.870 "state": "completed", 00:21:33.870 "digest": "sha512", 00:21:33.870 "dhgroup": "ffdhe4096" 00:21:33.870 } 00:21:33.870 } 00:21:33.870 ]' 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.870 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.870 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.870 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.870 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.870 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.870 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.129 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret 
DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:34.129 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.064 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.321 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.580 00:21:35.580 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.580 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.580 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.838 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.838 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.838 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.838 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.096 { 00:21:36.096 "cntlid": 123, 00:21:36.096 "qid": 0, 00:21:36.096 "state": "enabled", 00:21:36.096 "thread": "nvmf_tgt_poll_group_000", 00:21:36.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:36.096 "listen_address": { 00:21:36.096 "trtype": "TCP", 00:21:36.096 "adrfam": "IPv4", 00:21:36.096 "traddr": "10.0.0.3", 00:21:36.096 "trsvcid": "4420" 00:21:36.096 }, 00:21:36.096 "peer_address": { 00:21:36.096 "trtype": "TCP", 00:21:36.096 "adrfam": "IPv4", 00:21:36.096 "traddr": "10.0.0.1", 00:21:36.096 "trsvcid": "34556" 00:21:36.096 }, 00:21:36.096 "auth": { 00:21:36.096 "state": "completed", 00:21:36.096 "digest": "sha512", 00:21:36.096 "dhgroup": "ffdhe4096" 00:21:36.096 } 00:21:36.096 } 00:21:36.096 ]' 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.096 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.353 19:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:36.353 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.295 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.553 19:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.553 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.811 00:21:37.811 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.811 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.811 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.069 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.327 { 00:21:38.327 "cntlid": 125, 00:21:38.327 "qid": 0, 00:21:38.327 "state": "enabled", 00:21:38.327 "thread": "nvmf_tgt_poll_group_000", 00:21:38.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:38.327 "listen_address": { 00:21:38.327 "trtype": "TCP", 00:21:38.327 "adrfam": "IPv4", 00:21:38.327 "traddr": "10.0.0.3", 00:21:38.327 "trsvcid": "4420" 00:21:38.327 }, 00:21:38.327 "peer_address": { 00:21:38.327 "trtype": "TCP", 00:21:38.327 "adrfam": "IPv4", 00:21:38.327 "traddr": "10.0.0.1", 00:21:38.327 "trsvcid": "39544" 00:21:38.327 }, 00:21:38.327 "auth": { 00:21:38.327 "state": "completed", 00:21:38.327 "digest": "sha512", 00:21:38.327 "dhgroup": "ffdhe4096" 00:21:38.327 } 00:21:38.327 } 00:21:38.327 ]' 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.327 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.586 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:38.586 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.520 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.778 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.036 00:21:40.036 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.036 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.036 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.602 { 00:21:40.602 "cntlid": 127, 00:21:40.602 "qid": 0, 00:21:40.602 "state": "enabled", 00:21:40.602 "thread": "nvmf_tgt_poll_group_000", 00:21:40.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:40.602 "listen_address": { 00:21:40.602 "trtype": "TCP", 00:21:40.602 "adrfam": "IPv4", 00:21:40.602 "traddr": "10.0.0.3", 00:21:40.602 "trsvcid": "4420" 00:21:40.602 }, 00:21:40.602 "peer_address": { 00:21:40.602 "trtype": "TCP", 00:21:40.602 "adrfam": "IPv4", 00:21:40.602 "traddr": "10.0.0.1", 00:21:40.602 "trsvcid": "39568" 00:21:40.602 }, 00:21:40.602 "auth": { 00:21:40.602 "state": "completed", 00:21:40.602 "digest": "sha512", 00:21:40.602 "dhgroup": "ffdhe4096" 00:21:40.602 } 00:21:40.602 } 00:21:40.602 ]' 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.602 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.861 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:40.861 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.798 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.057 19:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.057 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.625 00:21:42.625 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.625 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.625 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.884 { 00:21:42.884 "cntlid": 129, 00:21:42.884 "qid": 0, 00:21:42.884 "state": "enabled", 00:21:42.884 "thread": "nvmf_tgt_poll_group_000", 00:21:42.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:42.884 "listen_address": { 00:21:42.884 "trtype": "TCP", 00:21:42.884 "adrfam": "IPv4", 00:21:42.884 "traddr": "10.0.0.3", 00:21:42.884 "trsvcid": "4420" 00:21:42.884 }, 00:21:42.884 "peer_address": { 00:21:42.884 "trtype": "TCP", 00:21:42.884 "adrfam": "IPv4", 00:21:42.884 "traddr": "10.0.0.1", 00:21:42.884 "trsvcid": "39606" 00:21:42.884 }, 00:21:42.884 "auth": { 00:21:42.884 "state": "completed", 00:21:42.884 "digest": "sha512", 00:21:42.884 "dhgroup": "ffdhe6144" 00:21:42.884 } 00:21:42.884 } 00:21:42.884 ]' 00:21:42.884 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.884 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.451 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:43.451 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.018 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.276 19:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.276 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.843 00:21:44.843 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.843 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.843 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.145 { 00:21:45.145 "cntlid": 131, 00:21:45.145 "qid": 0, 00:21:45.145 "state": "enabled", 00:21:45.145 "thread": "nvmf_tgt_poll_group_000", 00:21:45.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:45.145 "listen_address": { 00:21:45.145 "trtype": "TCP", 00:21:45.145 "adrfam": "IPv4", 00:21:45.145 "traddr": "10.0.0.3", 00:21:45.145 "trsvcid": "4420" 00:21:45.145 }, 00:21:45.145 "peer_address": { 00:21:45.145 "trtype": "TCP", 00:21:45.145 "adrfam": "IPv4", 00:21:45.145 "traddr": "10.0.0.1", 00:21:45.145 "trsvcid": "39630" 00:21:45.145 }, 00:21:45.145 "auth": { 00:21:45.145 "state": "completed", 00:21:45.145 "digest": "sha512", 00:21:45.145 "dhgroup": "ffdhe6144" 00:21:45.145 } 00:21:45.145 } 00:21:45.145 ]' 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.145 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.711 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:45.711 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.277 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.534 19:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.534 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.100 00:21:47.100 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.100 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.100 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.358 { 00:21:47.358 "cntlid": 133, 00:21:47.358 "qid": 0, 00:21:47.358 "state": "enabled", 00:21:47.358 "thread": "nvmf_tgt_poll_group_000", 00:21:47.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:47.358 "listen_address": { 00:21:47.358 "trtype": "TCP", 00:21:47.358 "adrfam": "IPv4", 00:21:47.358 "traddr": "10.0.0.3", 00:21:47.358 "trsvcid": "4420" 00:21:47.358 }, 00:21:47.358 "peer_address": { 00:21:47.358 "trtype": "TCP", 00:21:47.358 "adrfam": "IPv4", 00:21:47.358 "traddr": "10.0.0.1", 00:21:47.358 "trsvcid": "39666" 00:21:47.358 }, 00:21:47.358 "auth": { 00:21:47.358 "state": "completed", 00:21:47.358 "digest": "sha512", 00:21:47.358 "dhgroup": "ffdhe6144" 00:21:47.358 } 00:21:47.358 } 00:21:47.358 ]' 00:21:47.358 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.616 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.875 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:47.875 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.844 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.844 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.411 00:21:49.411 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.411 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.411 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.978 { 00:21:49.978 "cntlid": 135, 00:21:49.978 "qid": 0, 00:21:49.978 "state": "enabled", 00:21:49.978 "thread": "nvmf_tgt_poll_group_000", 00:21:49.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:49.978 "listen_address": { 00:21:49.978 "trtype": "TCP", 00:21:49.978 "adrfam": "IPv4", 00:21:49.978 "traddr": "10.0.0.3", 00:21:49.978 "trsvcid": "4420" 00:21:49.978 }, 00:21:49.978 "peer_address": { 00:21:49.978 "trtype": "TCP", 00:21:49.978 "adrfam": "IPv4", 00:21:49.978 "traddr": "10.0.0.1", 00:21:49.978 "trsvcid": "55294" 00:21:49.978 }, 00:21:49.978 "auth": { 00:21:49.978 "state": "completed", 00:21:49.978 "digest": "sha512", 00:21:49.978 "dhgroup": "ffdhe6144" 00:21:49.978 } 00:21:49.978 } 00:21:49.978 ]' 00:21:49.978 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.978 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.236 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:50.236 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.204 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.772 00:21:52.030 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.030 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.030 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.288 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.288 { 00:21:52.288 "cntlid": 137, 00:21:52.288 "qid": 0, 00:21:52.288 "state": "enabled", 00:21:52.288 "thread": "nvmf_tgt_poll_group_000", 00:21:52.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:52.288 "listen_address": { 00:21:52.288 "trtype": "TCP", 00:21:52.288 "adrfam": "IPv4", 00:21:52.289 "traddr": "10.0.0.3", 00:21:52.289 "trsvcid": "4420" 00:21:52.289 }, 00:21:52.289 "peer_address": { 00:21:52.289 "trtype": "TCP", 00:21:52.289 "adrfam": "IPv4", 00:21:52.289 "traddr": "10.0.0.1", 00:21:52.289 "trsvcid": "55322" 00:21:52.289 }, 00:21:52.289 "auth": { 00:21:52.289 "state": "completed", 00:21:52.289 "digest": "sha512", 00:21:52.289 "dhgroup": "ffdhe8192" 00:21:52.289 } 00:21:52.289 } 00:21:52.289 ]' 00:21:52.289 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.289 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.289 19:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.289 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.289 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.547 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.547 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.547 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.805 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:52.805 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.369 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.936 19:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.936 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.526 00:21:54.526 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.526 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.526 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.784 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.784 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.784 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.784 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.784 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.784 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.784 { 00:21:54.784 "cntlid": 139, 00:21:54.784 "qid": 0, 00:21:54.784 "state": "enabled", 00:21:54.784 "thread": "nvmf_tgt_poll_group_000", 00:21:54.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:54.784 "listen_address": { 00:21:54.784 "trtype": "TCP", 00:21:54.784 "adrfam": "IPv4", 00:21:54.784 "traddr": "10.0.0.3", 00:21:54.784 "trsvcid": "4420" 00:21:54.785 }, 00:21:54.785 "peer_address": { 00:21:54.785 "trtype": "TCP", 00:21:54.785 "adrfam": "IPv4", 00:21:54.785 "traddr": "10.0.0.1", 00:21:54.785 "trsvcid": "55346" 00:21:54.785 }, 00:21:54.785 "auth": { 00:21:54.785 "state": "completed", 00:21:54.785 "digest": "sha512", 00:21:54.785 "dhgroup": "ffdhe8192" 00:21:54.785 } 00:21:54.785 } 00:21:54.785 ]' 00:21:54.785 19:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.043 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.301 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:55.301 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: --dhchap-ctrl-secret DHHC-1:02:MDU2YTFkNjdiOGM2M2M5YjQwMDcwYzE3YmQwYzlmYWM5ZmM4ODNhZDlkYjllZjI5gRyGiA==: 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.235 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.494 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.093 00:21:57.093 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.093 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.093 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.359 { 00:21:57.359 "cntlid": 141, 00:21:57.359 "qid": 0, 00:21:57.359 "state": "enabled", 00:21:57.359 "thread": "nvmf_tgt_poll_group_000", 00:21:57.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:57.359 "listen_address": { 00:21:57.359 "trtype": "TCP", 00:21:57.359 "adrfam": "IPv4", 00:21:57.359 "traddr": "10.0.0.3", 00:21:57.359 "trsvcid": "4420" 00:21:57.359 }, 00:21:57.359 "peer_address": { 00:21:57.359 "trtype": "TCP", 00:21:57.359 "adrfam": "IPv4", 00:21:57.359 "traddr": "10.0.0.1", 00:21:57.359 "trsvcid": "55360" 00:21:57.359 }, 00:21:57.359 "auth": { 00:21:57.359 "state": "completed", 00:21:57.359 "digest": 
"sha512", 00:21:57.359 "dhgroup": "ffdhe8192" 00:21:57.359 } 00:21:57.359 } 00:21:57.359 ]' 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.359 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.618 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.618 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.618 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.618 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.618 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.876 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:57.876 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:01:ODVkZWEwY2M2NmI1ZDc3MTAyNTdmOGE5MzI0YzJhNTI62n9X: 00:21:58.807 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.808 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.064 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.628 00:21:59.628 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.628 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.628 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.888 { 00:21:59.888 "cntlid": 143, 00:21:59.888 "qid": 0, 00:21:59.888 "state": "enabled", 00:21:59.888 "thread": "nvmf_tgt_poll_group_000", 00:21:59.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:21:59.888 "listen_address": { 00:21:59.888 "trtype": "TCP", 00:21:59.888 "adrfam": "IPv4", 00:21:59.888 "traddr": "10.0.0.3", 00:21:59.888 "trsvcid": "4420" 00:21:59.888 }, 00:21:59.888 "peer_address": { 00:21:59.888 "trtype": "TCP", 00:21:59.888 "adrfam": "IPv4", 00:21:59.888 "traddr": "10.0.0.1", 00:21:59.888 "trsvcid": "39708" 00:21:59.888 }, 00:21:59.888 "auth": { 00:21:59.888 "state": "completed", 00:21:59.888 
"digest": "sha512", 00:21:59.888 "dhgroup": "ffdhe8192" 00:21:59.888 } 00:21:59.888 } 00:21:59.888 ]' 00:21:59.888 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.146 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.404 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:00.404 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.342 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.601 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.535 00:22:02.535 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.535 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.535 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.793 { 00:22:02.793 "cntlid": 145, 00:22:02.793 "qid": 0, 00:22:02.793 "state": "enabled", 00:22:02.793 "thread": "nvmf_tgt_poll_group_000", 00:22:02.793 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:02.793 "listen_address": { 00:22:02.793 "trtype": "TCP", 00:22:02.793 "adrfam": "IPv4", 00:22:02.793 "traddr": "10.0.0.3", 00:22:02.793 "trsvcid": "4420" 00:22:02.793 }, 00:22:02.793 "peer_address": { 00:22:02.793 "trtype": "TCP", 00:22:02.793 "adrfam": "IPv4", 00:22:02.793 "traddr": "10.0.0.1", 00:22:02.793 "trsvcid": "39722" 00:22:02.793 }, 00:22:02.793 "auth": { 00:22:02.793 "state": "completed", 00:22:02.793 "digest": "sha512", 00:22:02.793 "dhgroup": "ffdhe8192" 00:22:02.793 } 00:22:02.793 } 00:22:02.793 ]' 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.793 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.050 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:22:03.051 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:00:OTkzMWMxMzAxM2Q3M2YzOTZhMzc5ZmEyYmZkNWRjMjJlMDllM2FhNDBjYWMzOTc1kUIasg==: --dhchap-ctrl-secret DHHC-1:03:OWU0YzZkMWNiMDdkYTljN2ZiZGMyMDYxMTM1ODFhNWNlODBlYzY1NjkwMDNhOTFiMWQ4MzRkN2YzMzZiNjUyZu3TXg8=: 00:22:03.985 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 00:22:03.985 19:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:03.985 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.552 request: 00:22:04.552 { 00:22:04.552 "name": "nvme0", 00:22:04.552 "trtype": "tcp", 00:22:04.552 "traddr": "10.0.0.3", 00:22:04.552 "adrfam": "ipv4", 00:22:04.552 "trsvcid": "4420", 00:22:04.552 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:04.552 "prchk_reftag": false, 00:22:04.552 "prchk_guard": false, 00:22:04.552 "hdgst": false, 00:22:04.552 "ddgst": false, 00:22:04.552 "dhchap_key": "key2", 00:22:04.552 "allow_unrecognized_csi": false, 00:22:04.552 "method": "bdev_nvme_attach_controller", 00:22:04.552 "req_id": 1 00:22:04.552 } 00:22:04.552 Got JSON-RPC error response 00:22:04.552 response: 00:22:04.552 { 00:22:04.552 "code": -5, 00:22:04.552 "message": "Input/output error" 00:22:04.552 } 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:04.552 
19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.552 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.119 request: 00:22:05.119 { 00:22:05.119 "name": "nvme0", 00:22:05.119 "trtype": "tcp", 00:22:05.119 "traddr": "10.0.0.3", 00:22:05.119 "adrfam": "ipv4", 00:22:05.119 "trsvcid": "4420", 00:22:05.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:05.119 "prchk_reftag": false, 00:22:05.119 "prchk_guard": false, 00:22:05.119 "hdgst": false, 00:22:05.119 "ddgst": false, 00:22:05.119 "dhchap_key": "key1", 00:22:05.119 "dhchap_ctrlr_key": "ckey2", 00:22:05.119 "allow_unrecognized_csi": false, 00:22:05.119 "method": "bdev_nvme_attach_controller", 00:22:05.119 "req_id": 1 00:22:05.119 } 00:22:05.119 Got JSON-RPC error response 00:22:05.119 response: 00:22:05.119 { 
00:22:05.119 "code": -5, 00:22:05.119 "message": "Input/output error" 00:22:05.119 } 00:22:05.119 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.119 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.119 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.119 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.119 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.120 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.055 
request: 00:22:06.055 { 00:22:06.055 "name": "nvme0", 00:22:06.055 "trtype": "tcp", 00:22:06.055 "traddr": "10.0.0.3", 00:22:06.055 "adrfam": "ipv4", 00:22:06.055 "trsvcid": "4420", 00:22:06.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:06.055 "prchk_reftag": false, 00:22:06.055 "prchk_guard": false, 00:22:06.055 "hdgst": false, 00:22:06.055 "ddgst": false, 00:22:06.055 "dhchap_key": "key1", 00:22:06.055 "dhchap_ctrlr_key": "ckey1", 00:22:06.055 "allow_unrecognized_csi": false, 00:22:06.055 "method": "bdev_nvme_attach_controller", 00:22:06.055 "req_id": 1 00:22:06.055 } 00:22:06.055 Got JSON-RPC error response 00:22:06.055 response: 00:22:06.055 { 00:22:06.055 "code": -5, 00:22:06.055 "message": "Input/output error" 00:22:06.055 } 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.055 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67423 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67423 ']' 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67423 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67423 00:22:06.055 killing process with pid 67423 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67423' 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67423 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67423 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:06.055 19:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.055 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70634 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70634 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70634 ']' 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.313 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70634 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70634 ']' 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.570 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.828 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.828 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:06.828 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:06.828 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.828 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.085 null0 00:22:07.085 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.085 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nik 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.twe ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.twe 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c3C 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.q9Q ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.q9Q 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.086 19:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fRb 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.FBe ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FBe 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tNd 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
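This is the positive authentication path for key3: the target loads each generated secret into its keyring with keyring_file_add_key, pins key3 (sha512 digest, ffdhe8192 group) to the host entry with nvmf_subsystem_add_host, and the host-side SPDK app then authenticates while attaching the controller. A condensed sketch of that sequence; every command, address and NQN is copied from the trace above, only the shell variables are added for readability.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Target side (default socket /var/tmp/spdk.sock): register the secret and bind it to the host.
"$RPC" keyring_file_add_key key3 /tmp/spdk.key-sha512.tNd
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side (the initiator app on /var/tmp/host.sock): attach with the matching key.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

# Confirm the queue pair really negotiated sha512/ffdhe8192 and completed authentication.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

The qpair check at the end is exactly what the next few entries do with jq before the controller is detached and the failure cases begin.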
00:22:07.086 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.457 nvme0n1 00:22:08.457 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.457 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.457 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.715 { 00:22:08.715 "cntlid": 1, 00:22:08.715 "qid": 0, 00:22:08.715 "state": "enabled", 00:22:08.715 "thread": "nvmf_tgt_poll_group_000", 00:22:08.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:08.715 "listen_address": { 00:22:08.715 "trtype": "TCP", 00:22:08.715 "adrfam": "IPv4", 00:22:08.715 "traddr": "10.0.0.3", 00:22:08.715 "trsvcid": "4420" 00:22:08.715 }, 00:22:08.715 "peer_address": { 00:22:08.715 "trtype": "TCP", 00:22:08.715 "adrfam": "IPv4", 00:22:08.715 "traddr": "10.0.0.1", 00:22:08.715 "trsvcid": "39778" 00:22:08.715 }, 00:22:08.715 "auth": { 00:22:08.715 "state": "completed", 00:22:08.715 "digest": "sha512", 00:22:08.715 "dhgroup": "ffdhe8192" 00:22:08.715 } 00:22:08.715 } 00:22:08.715 ]' 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.715 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.281 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:09.281 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:09.845 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key3 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:09.845 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.417 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.676 request: 00:22:10.676 { 00:22:10.676 "name": "nvme0", 00:22:10.676 "trtype": "tcp", 00:22:10.676 "traddr": "10.0.0.3", 00:22:10.676 "adrfam": "ipv4", 00:22:10.676 "trsvcid": "4420", 00:22:10.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:10.676 "prchk_reftag": false, 00:22:10.676 "prchk_guard": false, 00:22:10.676 "hdgst": false, 00:22:10.676 "ddgst": false, 00:22:10.676 "dhchap_key": "key3", 00:22:10.676 "allow_unrecognized_csi": false, 00:22:10.676 "method": "bdev_nvme_attach_controller", 00:22:10.676 "req_id": 1 00:22:10.676 } 00:22:10.676 Got JSON-RPC error response 00:22:10.676 response: 00:22:10.676 { 00:22:10.676 "code": -5, 00:22:10.676 "message": "Input/output error" 00:22:10.676 } 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.676 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.934 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.192 request: 00:22:11.192 { 00:22:11.192 "name": "nvme0", 00:22:11.192 "trtype": "tcp", 00:22:11.192 "traddr": "10.0.0.3", 00:22:11.192 "adrfam": "ipv4", 00:22:11.192 "trsvcid": "4420", 00:22:11.192 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:11.192 "prchk_reftag": false, 00:22:11.192 "prchk_guard": false, 00:22:11.192 "hdgst": false, 00:22:11.192 "ddgst": false, 00:22:11.192 "dhchap_key": "key3", 00:22:11.192 "allow_unrecognized_csi": false, 00:22:11.192 "method": "bdev_nvme_attach_controller", 00:22:11.192 "req_id": 1 00:22:11.192 } 00:22:11.192 Got JSON-RPC error response 00:22:11.192 response: 00:22:11.192 { 00:22:11.193 "code": -5, 00:22:11.193 "message": "Input/output error" 00:22:11.193 } 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.193 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.450 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.451 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:12.015 request: 00:22:12.015 { 00:22:12.015 "name": "nvme0", 00:22:12.015 "trtype": "tcp", 00:22:12.015 "traddr": "10.0.0.3", 00:22:12.015 "adrfam": "ipv4", 00:22:12.015 "trsvcid": "4420", 00:22:12.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:12.015 "prchk_reftag": false, 00:22:12.015 "prchk_guard": false, 00:22:12.015 "hdgst": false, 00:22:12.015 "ddgst": false, 00:22:12.015 "dhchap_key": "key0", 00:22:12.015 "dhchap_ctrlr_key": "key1", 00:22:12.015 "allow_unrecognized_csi": false, 00:22:12.015 "method": "bdev_nvme_attach_controller", 00:22:12.015 "req_id": 1 00:22:12.015 } 00:22:12.015 Got JSON-RPC error response 00:22:12.015 response: 00:22:12.015 { 00:22:12.015 "code": -5, 00:22:12.015 "message": "Input/output error" 00:22:12.015 } 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:12.015 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:12.272 nvme0n1 00:22:12.272 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:12.272 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.272 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:12.530 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.530 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.530 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.095 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:14.092 nvme0n1 00:22:14.092 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:14.092 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:14.092 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:14.349 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.607 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.607 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:14.607 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid cb4c864e-bb30-4900-8fc1-989c4e76fc1b -l 0 --dhchap-secret DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: --dhchap-ctrl-secret DHHC-1:03:NjhlMTU1NzYxZWMyNmYyYzBmYTA0NjM2MjVjYTY4YjVhOTMyYzBhMzk0OWRhMWZhN2ViZWZhOWE2MWEzNjFjMiusz0o=: 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.541 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:15.798 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:16.363 request: 00:22:16.363 { 00:22:16.363 "name": "nvme0", 00:22:16.363 "trtype": "tcp", 00:22:16.363 "traddr": "10.0.0.3", 00:22:16.363 "adrfam": "ipv4", 00:22:16.363 "trsvcid": "4420", 00:22:16.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b", 00:22:16.363 "prchk_reftag": false, 00:22:16.363 "prchk_guard": false, 00:22:16.363 "hdgst": false, 00:22:16.363 "ddgst": false, 00:22:16.363 "dhchap_key": "key1", 00:22:16.363 "allow_unrecognized_csi": false, 00:22:16.363 "method": "bdev_nvme_attach_controller", 00:22:16.363 "req_id": 1 00:22:16.363 } 00:22:16.363 Got JSON-RPC error response 00:22:16.363 response: 00:22:16.363 { 00:22:16.363 "code": -5, 00:22:16.363 "message": "Input/output error" 00:22:16.363 } 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.363 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:17.296 nvme0n1 00:22:17.296 
19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:17.296 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:17.296 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.555 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.555 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.555 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:18.119 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:18.377 nvme0n1 00:22:18.377 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:18.377 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:18.377 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.635 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.635 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.635 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.201 19:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: '' 2s 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: ]] 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2I5NGFhYmFkMTE5YTIyYTRiMTI0NDQ1OTNhMGU5YzbFv6vh: 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:19.201 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: 2s 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:21.100 19:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: ]] 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjFiNTVhNDIxOTVhNmYxZTA4MzFiZTFhODJlYWFhOGQyNWJiZmU5ZmQ0OGFiMzRlPXGhGQ==: 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:21.100 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.626 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.192 nvme0n1 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.192 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.126 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:25.126 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:25.126 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:25.384 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:25.642 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:25.642 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:25.642 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:25.900 19:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:25.900 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:26.158 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.158 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:26.158 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.158 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.158 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:26.724 request: 00:22:26.724 { 00:22:26.724 "name": "nvme0", 00:22:26.724 "dhchap_key": "key1", 00:22:26.724 "dhchap_ctrlr_key": "key3", 00:22:26.724 "method": "bdev_nvme_set_keys", 00:22:26.724 "req_id": 1 00:22:26.724 } 00:22:26.724 Got JSON-RPC error response 00:22:26.724 response: 00:22:26.724 { 00:22:26.724 "code": -13, 00:22:26.724 "message": "Permission denied" 00:22:26.724 } 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.724 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:26.982 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:26.982 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.397 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.332 nvme0n1 00:22:29.332 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.332 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.332 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.590 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:30.157 request: 00:22:30.157 { 00:22:30.157 "name": "nvme0", 00:22:30.157 "dhchap_key": "key2", 00:22:30.157 "dhchap_ctrlr_key": "key0", 00:22:30.157 "method": "bdev_nvme_set_keys", 00:22:30.157 "req_id": 1 00:22:30.157 } 00:22:30.157 Got JSON-RPC error response 00:22:30.157 response: 00:22:30.157 { 00:22:30.157 "code": -13, 00:22:30.157 "message": "Permission denied" 00:22:30.157 } 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:30.157 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.414 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:30.414 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:31.414 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:31.414 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:31.414 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67455 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67455 ']' 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67455 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67455 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:31.672 killing process with pid 67455 00:22:31.672 19:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67455' 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67455 00:22:31.672 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67455 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.265 rmmod nvme_tcp 00:22:32.265 rmmod nvme_fabrics 00:22:32.265 rmmod nvme_keyring 00:22:32.265 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70634 ']' 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70634 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70634 ']' 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70634 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70634 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.524 killing process with pid 70634 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70634' 00:22:32.524 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70634 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70634 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 
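The entries around this point are pure teardown: the host NVMe modules are unloaded, the SPDK-tagged iptables rules are dropped, the veth/bridge topology and the target namespace are removed, and the generated key files are deleted. A hedged sketch of that cleanup based on the commands visible in the trace; the explicit ip netns delete is an assumption about what remove_spdk_ns does, and nvmfpid refers to the target started earlier in this test.

# Stop the target reactor started at the top of this test.
kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null

# Unload the host-side transport; in this trace the first call also pulls out
# nvme_fabrics and nvme_keyring, so the second is effectively a no-op.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Keep the system firewall, drop only the SPDK_NVMF-tagged rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Undo the veth/bridge plumbing and the target's network namespace.
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumption: this is what remove_spdk_ns amounts to

# Remove the DH-CHAP key material generated for this run.
rm -f /tmp/spdk.key-null.Nik /tmp/spdk.key-sha256.c3C /tmp/spdk.key-sha384.fRb \
      /tmp/spdk.key-sha512.tNd /tmp/spdk.key-sha512.twe /tmp/spdk.key-sha384.q9Q \
      /tmp/spdk.key-sha256.FBe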
00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:32.525 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.783 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.783 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:22:32.783 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Nik /tmp/spdk.key-sha256.c3C /tmp/spdk.key-sha384.fRb /tmp/spdk.key-sha512.tNd /tmp/spdk.key-sha512.twe /tmp/spdk.key-sha384.q9Q /tmp/spdk.key-sha256.FBe '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:22:32.783 00:22:32.783 real 3m25.895s 00:22:32.783 user 8m12.456s 00:22:32.783 sys 0m32.072s 00:22:32.783 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.783 ************************************ 00:22:32.783 END TEST nvmf_auth_target 00:22:32.783 ************************************ 00:22:32.783 19:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.041 ************************************ 00:22:33.041 START TEST nvmf_bdevio_no_huge 00:22:33.041 ************************************ 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.041 * Looking for test storage... 00:22:33.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.041 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.042 --rc genhtml_branch_coverage=1 00:22:33.042 --rc genhtml_function_coverage=1 00:22:33.042 --rc genhtml_legend=1 00:22:33.042 --rc geninfo_all_blocks=1 00:22:33.042 --rc geninfo_unexecuted_blocks=1 00:22:33.042 00:22:33.042 ' 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.042 --rc genhtml_branch_coverage=1 00:22:33.042 --rc genhtml_function_coverage=1 00:22:33.042 --rc genhtml_legend=1 00:22:33.042 --rc geninfo_all_blocks=1 00:22:33.042 --rc geninfo_unexecuted_blocks=1 00:22:33.042 00:22:33.042 ' 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.042 --rc genhtml_branch_coverage=1 00:22:33.042 --rc genhtml_function_coverage=1 00:22:33.042 --rc genhtml_legend=1 00:22:33.042 --rc geninfo_all_blocks=1 00:22:33.042 --rc geninfo_unexecuted_blocks=1 00:22:33.042 00:22:33.042 ' 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:33.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.042 --rc genhtml_branch_coverage=1 00:22:33.042 --rc genhtml_function_coverage=1 00:22:33.042 --rc genhtml_legend=1 00:22:33.042 --rc geninfo_all_blocks=1 00:22:33.042 --rc geninfo_unexecuted_blocks=1 00:22:33.042 00:22:33.042 ' 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.042 
19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.042 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.301 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.302 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.302 
19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:33.302 Cannot find device "nvmf_init_br" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:33.302 Cannot find device "nvmf_init_br2" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:33.302 Cannot find device "nvmf_tgt_br" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.302 Cannot find device "nvmf_tgt_br2" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:33.302 Cannot find device "nvmf_init_br" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:33.302 Cannot find device "nvmf_init_br2" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:33.302 Cannot find device "nvmf_tgt_br" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:33.302 Cannot find device "nvmf_tgt_br2" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:33.302 Cannot find device "nvmf_br" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:33.302 Cannot find device "nvmf_init_if" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:33.302 Cannot find device "nvmf_init_if2" 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:22:33.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:33.302 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:33.561 19:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:33.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:22:33.561 00:22:33.561 --- 10.0.0.3 ping statistics --- 00:22:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.561 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:33.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:33.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:22:33.561 00:22:33.561 --- 10.0.0.4 ping statistics --- 00:22:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.561 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:22:33.561 00:22:33.561 --- 10.0.0.1 ping statistics --- 00:22:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.561 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:33.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
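The ipts calls traced above tag every firewall rule they add with an SPDK_NVMF comment, which is what lets the iptr step during nvmftestfini strip only SPDK's rules. A sketch of that pairing, reconstructed from the traced commands (the names ipts and iptr come from the trace; treat the bodies as an approximation of nvmf/common.sh, not its exact source):

    ipts() {
        # add the rule and record the original arguments in a comment for later cleanup
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        # dump all rules, drop the SPDK-tagged ones, and reload the remainder in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }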
00:22:33.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:22:33.561 00:22:33.561 --- 10.0.0.2 ping statistics --- 00:22:33.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.561 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=71292 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 71292 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71292 ']' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.561 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.562 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.562 [2024-10-17 19:24:42.800466] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
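With connectivity across the veth bridge confirmed by the four pings, nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace without hugepages. The full command is in the trace; it is repeated here with the flags annotated (the flag meanings are a reading of standard SPDK application options, not taken from the script itself):

    # -i 0: shared-memory ID for this app instance; -e 0xFFFF: tracepoint group mask;
    # --no-huge -s 1024: run on ordinary pages with a 1024 MB memory reservation;
    # -m 0x78: core mask for cores 3-6, matching the reactor start-up messages above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78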
00:22:33.562 [2024-10-17 19:24:42.800610] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:33.818 [2024-10-17 19:24:42.952675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.818 [2024-10-17 19:24:43.042560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.818 [2024-10-17 19:24:43.042656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.818 [2024-10-17 19:24:43.042672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.818 [2024-10-17 19:24:43.042682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.818 [2024-10-17 19:24:43.042691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.818 [2024-10-17 19:24:43.043759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.818 [2024-10-17 19:24:43.043909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:33.818 [2024-10-17 19:24:43.044060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:33.819 [2024-10-17 19:24:43.044353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.819 [2024-10-17 19:24:43.050671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 [2024-10-17 19:24:43.923008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 Malloc0 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.763 19:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.763 [2024-10-17 19:24:43.965351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:34.763 { 00:22:34.763 "params": { 00:22:34.763 "name": "Nvme$subsystem", 00:22:34.763 "trtype": "$TEST_TRANSPORT", 00:22:34.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.763 "adrfam": "ipv4", 00:22:34.763 "trsvcid": "$NVMF_PORT", 00:22:34.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.763 "hdgst": ${hdgst:-false}, 00:22:34.763 "ddgst": ${ddgst:-false} 00:22:34.763 }, 00:22:34.763 "method": "bdev_nvme_attach_controller" 00:22:34.763 } 00:22:34.763 EOF 00:22:34.763 )") 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
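bdevio.sh builds the target side with five RPCs, creating the TCP transport, a 64 MiB malloc bdev, and a subsystem that exposes it on 10.0.0.3:4420, before piping a generated bdev_nvme_attach_controller config to the bdevio binary over /dev/fd/62. The same sequence, written as direct rpc.py calls against the target's /var/tmp/spdk.sock socket, is sketched below; rpc_cmd in the trace ultimately drives these same RPCs, so this is an equivalent rendering rather than the script's literal text:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420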
00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:22:34.763 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:34.763 "params": { 00:22:34.763 "name": "Nvme1", 00:22:34.763 "trtype": "tcp", 00:22:34.763 "traddr": "10.0.0.3", 00:22:34.763 "adrfam": "ipv4", 00:22:34.763 "trsvcid": "4420", 00:22:34.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.763 "hdgst": false, 00:22:34.763 "ddgst": false 00:22:34.763 }, 00:22:34.763 "method": "bdev_nvme_attach_controller" 00:22:34.763 }' 00:22:35.021 [2024-10-17 19:24:44.029295] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:22:35.021 [2024-10-17 19:24:44.029424] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71329 ] 00:22:35.021 [2024-10-17 19:24:44.177674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.021 [2024-10-17 19:24:44.272015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.021 [2024-10-17 19:24:44.272189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.021 [2024-10-17 19:24:44.272193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.278 [2024-10-17 19:24:44.286907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:35.278 I/O targets: 00:22:35.278 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:35.279 00:22:35.279 00:22:35.279 CUnit - A unit testing framework for C - Version 2.1-3 00:22:35.279 http://cunit.sourceforge.net/ 00:22:35.279 00:22:35.279 00:22:35.279 Suite: bdevio tests on: Nvme1n1 00:22:35.279 Test: blockdev write read block ...passed 00:22:35.279 Test: blockdev write zeroes read block ...passed 00:22:35.279 Test: blockdev write zeroes read no split ...passed 00:22:35.537 Test: blockdev write zeroes read split ...passed 00:22:35.537 Test: blockdev write zeroes read split partial ...passed 00:22:35.537 Test: blockdev reset ...[2024-10-17 19:24:44.552923] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:35.537 [2024-10-17 19:24:44.553031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2281720 (9): Bad file descriptor 00:22:35.537 [2024-10-17 19:24:44.565803] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:35.537 passed 00:22:35.537 Test: blockdev write read 8 blocks ...passed 00:22:35.537 Test: blockdev write read size > 128k ...passed 00:22:35.537 Test: blockdev write read invalid size ...passed 00:22:35.537 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:35.537 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:35.537 Test: blockdev write read max offset ...passed 00:22:35.537 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:35.537 Test: blockdev writev readv 8 blocks ...passed 00:22:35.537 Test: blockdev writev readv 30 x 1block ...passed 00:22:35.537 Test: blockdev writev readv block ...passed 00:22:35.537 Test: blockdev writev readv size > 128k ...passed 00:22:35.537 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:35.537 Test: blockdev comparev and writev ...[2024-10-17 19:24:44.574587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.574638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.574663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.574677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.575181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.575218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.575253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.575627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.575662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.575685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.575698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.576185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.576220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.576242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:35.537 [2024-10-17 19:24:44.576254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:35.537 passed 00:22:35.537 Test: blockdev nvme passthru rw ...passed 00:22:35.537 Test: blockdev nvme passthru vendor specific ...[2024-10-17 19:24:44.577091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.537 [2024-10-17 19:24:44.577120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.577276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.537 [2024-10-17 19:24:44.577303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.577425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.537 [2024-10-17 19:24:44.577458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:35.537 [2024-10-17 19:24:44.577583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:35.537 [2024-10-17 19:24:44.577615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:35.537 passed 00:22:35.537 Test: blockdev nvme admin passthru ...passed 00:22:35.537 Test: blockdev copy ...passed 00:22:35.537 00:22:35.537 Run Summary: Type Total Ran Passed Failed Inactive 00:22:35.537 suites 1 1 n/a 0 0 00:22:35.537 tests 23 23 23 0 0 00:22:35.537 asserts 152 152 152 0 n/a 00:22:35.537 00:22:35.537 Elapsed time = 0.172 seconds 00:22:35.795 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.795 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.795 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.795 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.795 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:35.795 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:35.795 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:35.796 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.054 rmmod nvme_tcp 00:22:36.054 rmmod nvme_fabrics 00:22:36.054 rmmod nvme_keyring 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 71292 ']' 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 71292 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71292 ']' 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71292 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71292 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:36.054 killing process with pid 71292 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71292' 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71292 00:22:36.054 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71292 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:36.620 19:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:36.620 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:22:36.621 00:22:36.621 real 0m3.762s 00:22:36.621 user 0m11.625s 00:22:36.621 sys 0m1.608s 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.621 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.621 ************************************ 00:22:36.621 END TEST nvmf_bdevio_no_huge 00:22:36.621 ************************************ 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.879 ************************************ 00:22:36.879 START TEST nvmf_tls 00:22:36.879 ************************************ 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:36.879 * Looking for test storage... 
00:22:36.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:22:36.879 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.879 --rc genhtml_branch_coverage=1 00:22:36.879 --rc genhtml_function_coverage=1 00:22:36.879 --rc genhtml_legend=1 00:22:36.879 --rc geninfo_all_blocks=1 00:22:36.879 --rc geninfo_unexecuted_blocks=1 00:22:36.879 00:22:36.879 ' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.879 --rc genhtml_branch_coverage=1 00:22:36.879 --rc genhtml_function_coverage=1 00:22:36.879 --rc genhtml_legend=1 00:22:36.879 --rc geninfo_all_blocks=1 00:22:36.879 --rc geninfo_unexecuted_blocks=1 00:22:36.879 00:22:36.879 ' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.879 --rc genhtml_branch_coverage=1 00:22:36.879 --rc genhtml_function_coverage=1 00:22:36.879 --rc genhtml_legend=1 00:22:36.879 --rc geninfo_all_blocks=1 00:22:36.879 --rc geninfo_unexecuted_blocks=1 00:22:36.879 00:22:36.879 ' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:36.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.879 --rc genhtml_branch_coverage=1 00:22:36.879 --rc genhtml_function_coverage=1 00:22:36.879 --rc genhtml_legend=1 00:22:36.879 --rc geninfo_all_blocks=1 00:22:36.879 --rc geninfo_unexecuted_blocks=1 00:22:36.879 00:22:36.879 ' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.879 19:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:36.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.879 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:36.880 
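Sourcing nvmf/common.sh above also mints a fresh initiator identity with nvme gen-hostnqn. A minimal equivalent of the NVME_HOSTNQN/NVME_HOSTID assignments is sketched below; the UUID extraction is an assumption, the real script may derive the hostid differently.

  # Sketch: initiator identity later passed via the NVME_HOST array.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep just the UUID portion (assumed)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")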
19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.880 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:37.138 Cannot find device "nvmf_init_br" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:37.138 Cannot find device "nvmf_init_br2" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:37.138 Cannot find device "nvmf_tgt_br" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.138 Cannot find device "nvmf_tgt_br2" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:37.138 Cannot find device "nvmf_init_br" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:37.138 Cannot find device "nvmf_init_br2" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:37.138 Cannot find device "nvmf_tgt_br" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:37.138 Cannot find device "nvmf_tgt_br2" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:37.138 Cannot find device "nvmf_br" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:37.138 Cannot find device "nvmf_init_if" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:37.138 Cannot find device "nvmf_init_if2" 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:37.138 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:37.411 19:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:37.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:22:37.411 00:22:37.411 --- 10.0.0.3 ping statistics --- 00:22:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.411 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:37.411 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:37.411 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:22:37.411 00:22:37.411 --- 10.0.0.4 ping statistics --- 00:22:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.411 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:22:37.411 00:22:37.411 --- 10.0.0.1 ping statistics --- 00:22:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.411 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:37.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:22:37.411 00:22:37.411 --- 10.0.0.2 ping statistics --- 00:22:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.411 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71566 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71566 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71566 ']' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.411 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.411 [2024-10-17 19:24:46.624871] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
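Before the target comes up, nvmf_veth_init has built the test network exercised by the pings above: two veth pairs whose target ends sit in the nvmf_tgt_ns_spdk namespace, host-side peers joined by a bridge, plus ACCEPT rules for port 4420. A sketch covering the first interface pair (the *_if2 pair with 10.0.0.2/10.0.0.4 is handled identically):

  # Sketch of the nvmf_veth_init topology: initiator side 10.0.0.1 in the root
  # namespace, target side 10.0.0.3 inside nvmf_tgt_ns_spdk, bridged together.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                     # root namespace reaches the target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # and the target reaches back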
00:22:37.411 [2024-10-17 19:24:46.624969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.677 [2024-10-17 19:24:46.765704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.677 [2024-10-17 19:24:46.861876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.677 [2024-10-17 19:24:46.861951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.677 [2024-10-17 19:24:46.861966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.677 [2024-10-17 19:24:46.861976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.677 [2024-10-17 19:24:46.861987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.677 [2024-10-17 19:24:46.862528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:37.677 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:37.935 true 00:22:38.195 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:38.195 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:38.453 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:38.453 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:38.453 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:38.712 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:38.712 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:38.969 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:38.969 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:38.969 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:39.227 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:22:39.227 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:39.486 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:39.486 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:39.486 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:39.486 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.052 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:40.052 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:40.052 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:40.310 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.310 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:40.568 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:40.568 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:40.568 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:40.827 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.827 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ijKYluEEMl 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.37VWGfgBmA 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ijKYluEEMl 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.37VWGfgBmA 00:22:41.395 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.653 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:41.911 [2024-10-17 19:24:51.119410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:42.169 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ijKYluEEMl 00:22:42.169 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ijKYluEEMl 00:22:42.169 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:42.169 [2024-10-17 19:24:51.422668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.426 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:42.683 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:22:42.683 [2024-10-17 19:24:51.926780] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.683 [2024-10-17 19:24:51.927203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:42.941 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:42.941 malloc0 00:22:43.198 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:43.455 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl 00:22:43.713 19:24:52 
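The format_interchange_psk calls above wrap the raw hex strings into the NVMe TLS PSK interchange form, NVMeTLSkey-1:01:<base64>:, where 01 reflects digest=1. The trace hides the inline python, so the sketch below is a best guess at the framing (payload is the key string's bytes followed by their CRC-32; the little-endian byte order is an assumption), which at least matches the 48-character base64 blobs seen above:

  # Sketch: produce an interchange PSK like the contents of /tmp/tmp.ijKYluEEMl.
  key=00112233445566778899aabbccddeeff
  psk=$(python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); c = zlib.crc32(k) & 0xffffffff; print("NVMeTLSkey-1:01:" + base64.b64encode(k + struct.pack("<I", c)).decode() + ":")' "$key")
  key_path=$(mktemp)
  echo -n "$psk" > "$key_path" && chmod 0600 "$key_path"   # as done at tls.sh@122-129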
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.970 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ijKYluEEMl 00:22:56.168 Initializing NVMe Controllers 00:22:56.168 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.168 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.168 Initialization complete. Launching workers. 00:22:56.168 ======================================================== 00:22:56.168 Latency(us) 00:22:56.168 Device Information : IOPS MiB/s Average min max 00:22:56.168 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9102.24 35.56 7031.54 1524.38 14340.59 00:22:56.168 ======================================================== 00:22:56.168 Total : 9102.24 35.56 7031.54 1524.38 14340.59 00:22:56.168 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ijKYluEEMl 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ijKYluEEMl 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71803 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71803 /var/tmp/bdevperf.sock 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71803 ']' 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
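Condensed, the target-side setup performed by setup_nvmf_tgt above is the following RPC sequence (key path and NQNs exactly as in the trace; the nvmf_tgt itself was started inside the namespace with --wait-for-rpc). The spdk_nvme_perf run that follows then connects with -S ssl and --psk-path as logged.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_impl_set_options -i ssl --tls-version 13        # pin the ssl impl to TLS 1.3
  $rpc framework_start_init                                 # leave --wait-for-rpc mode
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl        # register the PSK file as key0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0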
00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.168 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.168 [2024-10-17 19:25:03.297464] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:22:56.168 [2024-10-17 19:25:03.297863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71803 ] 00:22:56.168 [2024-10-17 19:25:03.439406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.168 [2024-10-17 19:25:03.522066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.168 [2024-10-17 19:25:03.598113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.168 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.168 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:56.168 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl 00:22:56.168 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.168 [2024-10-17 19:25:04.861621] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.168 TLSTESTn1 00:22:56.168 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.168 Running I/O for 10 seconds... 
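The run_bdevperf half that produces the I/O samples below follows the same key-then-attach pattern against bdevperf's own RPC socket. Roughly (the harness additionally waits for the socket before issuing RPCs):

  # Sketch of the traced bdevperf flow: start idle (-z), register the key,
  # attach the TLS controller (this creates bdev TLSTESTn1), then run the job.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests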
00:22:58.030 4109.00 IOPS, 16.05 MiB/s [2024-10-17T19:25:08.219Z] 4131.00 IOPS, 16.14 MiB/s [2024-10-17T19:25:09.153Z] 4137.00 IOPS, 16.16 MiB/s [2024-10-17T19:25:10.086Z] 4141.50 IOPS, 16.18 MiB/s [2024-10-17T19:25:11.459Z] 4141.60 IOPS, 16.18 MiB/s [2024-10-17T19:25:12.393Z] 4142.83 IOPS, 16.18 MiB/s [2024-10-17T19:25:13.327Z] 4142.29 IOPS, 16.18 MiB/s [2024-10-17T19:25:14.262Z] 4144.00 IOPS, 16.19 MiB/s [2024-10-17T19:25:15.199Z] 4134.67 IOPS, 16.15 MiB/s [2024-10-17T19:25:15.199Z] 4134.30 IOPS, 16.15 MiB/s 00:23:05.941 Latency(us) 00:23:05.941 [2024-10-17T19:25:15.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.941 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.941 Verification LBA range: start 0x0 length 0x2000 00:23:05.941 TLSTESTn1 : 10.02 4139.55 16.17 0.00 0.00 30862.79 6047.19 25737.77 00:23:05.941 [2024-10-17T19:25:15.199Z] =================================================================================================================== 00:23:05.941 [2024-10-17T19:25:15.199Z] Total : 4139.55 16.17 0.00 0.00 30862.79 6047.19 25737.77 00:23:05.941 { 00:23:05.941 "results": [ 00:23:05.941 { 00:23:05.941 "job": "TLSTESTn1", 00:23:05.941 "core_mask": "0x4", 00:23:05.941 "workload": "verify", 00:23:05.941 "status": "finished", 00:23:05.941 "verify_range": { 00:23:05.941 "start": 0, 00:23:05.941 "length": 8192 00:23:05.941 }, 00:23:05.941 "queue_depth": 128, 00:23:05.941 "io_size": 4096, 00:23:05.941 "runtime": 10.017282, 00:23:05.941 "iops": 4139.546036539652, 00:23:05.941 "mibps": 16.170101705233016, 00:23:05.941 "io_failed": 0, 00:23:05.941 "io_timeout": 0, 00:23:05.941 "avg_latency_us": 30862.793422151677, 00:23:05.941 "min_latency_us": 6047.185454545454, 00:23:05.941 "max_latency_us": 25737.774545454544 00:23:05.941 } 00:23:05.941 ], 00:23:05.941 "core_count": 1 00:23:05.941 } 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71803 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71803 ']' 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71803 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71803 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:05.941 killing process with pid 71803 00:23:05.941 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.941 00:23:05.941 Latency(us) 00:23:05.941 [2024-10-17T19:25:15.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.941 [2024-10-17T19:25:15.199Z] =================================================================================================================== 00:23:05.941 [2024-10-17T19:25:15.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71803' 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71803 00:23:05.941 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71803 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37VWGfgBmA 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37VWGfgBmA 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.199 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37VWGfgBmA 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.37VWGfgBmA 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71940 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71940 /var/tmp/bdevperf.sock 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71940 ']' 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.200 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.457 [2024-10-17 19:25:15.470558] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:06.457 [2024-10-17 19:25:15.470891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71940 ] 00:23:06.457 [2024-10-17 19:25:15.611501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.457 [2024-10-17 19:25:15.689156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.716 [2024-10-17 19:25:15.761818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:07.283 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.283 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:07.283 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.37VWGfgBmA 00:23:07.849 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.107 [2024-10-17 19:25:17.206939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.107 [2024-10-17 19:25:17.212050] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.107 [2024-10-17 19:25:17.212661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518090 (107): Transport endpoint is not connected 00:23:08.107 [2024-10-17 19:25:17.213648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1518090 (9): Bad file descriptor 00:23:08.107 [2024-10-17 19:25:17.214644] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:08.107 [2024-10-17 19:25:17.216557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:23:08.107 [2024-10-17 19:25:17.216621] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:08.107 [2024-10-17 19:25:17.216658] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:08.107 request: 00:23:08.107 { 00:23:08.107 "name": "TLSTEST", 00:23:08.107 "trtype": "tcp", 00:23:08.107 "traddr": "10.0.0.3", 00:23:08.107 "adrfam": "ipv4", 00:23:08.107 "trsvcid": "4420", 00:23:08.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.108 "prchk_reftag": false, 00:23:08.108 "prchk_guard": false, 00:23:08.108 "hdgst": false, 00:23:08.108 "ddgst": false, 00:23:08.108 "psk": "key0", 00:23:08.108 "allow_unrecognized_csi": false, 00:23:08.108 "method": "bdev_nvme_attach_controller", 00:23:08.108 "req_id": 1 00:23:08.108 } 00:23:08.108 Got JSON-RPC error response 00:23:08.108 response: 00:23:08.108 { 00:23:08.108 "code": -5, 00:23:08.108 "message": "Input/output error" 00:23:08.108 } 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71940 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71940 ']' 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71940 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71940 00:23:08.108 killing process with pid 71940 00:23:08.108 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.108 00:23:08.108 Latency(us) 00:23:08.108 [2024-10-17T19:25:17.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.108 [2024-10-17T19:25:17.366Z] =================================================================================================================== 00:23:08.108 [2024-10-17T19:25:17.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71940' 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71940 00:23:08.108 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71940 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ijKYluEEMl 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ijKYluEEMl 
00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.365 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ijKYluEEMl 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ijKYluEEMl 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71977 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71977 /var/tmp/bdevperf.sock 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71977 ']' 00:23:08.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.366 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.366 [2024-10-17 19:25:17.612468] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:08.366 [2024-10-17 19:25:17.612586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71977 ] 00:23:08.624 [2024-10-17 19:25:17.751845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.624 [2024-10-17 19:25:17.813153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.624 [2024-10-17 19:25:17.867204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:09.585 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.585 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.585 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl 00:23:09.843 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:10.103 [2024-10-17 19:25:19.161996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.103 [2024-10-17 19:25:19.172355] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:10.103 [2024-10-17 19:25:19.172631] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:10.103 [2024-10-17 19:25:19.172712] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:10.103 [2024-10-17 19:25:19.172934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b17090 (107): Transport endpoint is not connected 00:23:10.103 [2024-10-17 19:25:19.173929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b17090 (9): Bad file descriptor 00:23:10.103 [2024-10-17 19:25:19.174922] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.103 [2024-10-17 19:25:19.174945] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:23:10.103 [2024-10-17 19:25:19.174956] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:10.103 [2024-10-17 19:25:19.174968] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
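The two "Could not find PSK for identity" errors above are the crux of this negative case: the target resolves the TLS PSK through an identity string built from the host and subsystem NQNs, and no key has been registered for the nqn.2016-06.io.spdk:host2 / nqn.2016-06.io.spdk:cnode1 pair, so the handshake and the attach fail (the -5 Input/output error dumped in the request/response that follows). A minimal Python sketch of that identity string as it appears in the error text; the leading "NVMe0R01" token is copied verbatim from the trace, and treating it as a fixed prefix is an assumption, not a statement about the NVMe/TCP specification:

# Hypothetical helper mirroring the PSK identity string seen in the errors above.
# The "NVMe0R01" token is reproduced verbatim from the trace.
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    return "NVMe0R01 {} {}".format(hostnqn, subnqn)

print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1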
00:23:10.103 request: 00:23:10.103 { 00:23:10.103 "name": "TLSTEST", 00:23:10.103 "trtype": "tcp", 00:23:10.103 "traddr": "10.0.0.3", 00:23:10.103 "adrfam": "ipv4", 00:23:10.103 "trsvcid": "4420", 00:23:10.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.103 "prchk_reftag": false, 00:23:10.103 "prchk_guard": false, 00:23:10.103 "hdgst": false, 00:23:10.103 "ddgst": false, 00:23:10.103 "psk": "key0", 00:23:10.103 "allow_unrecognized_csi": false, 00:23:10.103 "method": "bdev_nvme_attach_controller", 00:23:10.103 "req_id": 1 00:23:10.103 } 00:23:10.103 Got JSON-RPC error response 00:23:10.103 response: 00:23:10.103 { 00:23:10.103 "code": -5, 00:23:10.103 "message": "Input/output error" 00:23:10.103 } 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71977 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71977 ']' 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71977 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71977 00:23:10.103 killing process with pid 71977 00:23:10.103 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.103 00:23:10.103 Latency(us) 00:23:10.103 [2024-10-17T19:25:19.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.103 [2024-10-17T19:25:19.361Z] =================================================================================================================== 00:23:10.103 [2024-10-17T19:25:19.361Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71977' 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71977 00:23:10.103 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71977 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ijKYluEEMl 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ijKYluEEMl 
00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ijKYluEEMl 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ijKYluEEMl 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72005 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72005 /var/tmp/bdevperf.sock 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72005 ']' 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.361 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.361 [2024-10-17 19:25:19.487738] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:10.361 [2024-10-17 19:25:19.488075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72005 ] 00:23:10.620 [2024-10-17 19:25:19.626998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.620 [2024-10-17 19:25:19.681591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.620 [2024-10-17 19:25:19.734939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:11.555 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.555 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:11.555 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ijKYluEEMl 00:23:11.555 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.813 [2024-10-17 19:25:21.018262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.813 [2024-10-17 19:25:21.023435] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.813 [2024-10-17 19:25:21.023492] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:11.813 [2024-10-17 19:25:21.023574] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:11.813 [2024-10-17 19:25:21.024148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a24090 (107): Transport endpoint is not connected 00:23:11.813 [2024-10-17 19:25:21.025121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a24090 (9): Bad file descriptor 00:23:11.813 [2024-10-17 19:25:21.026118] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:11.813 [2024-10-17 19:25:21.026155] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:23:11.813 [2024-10-17 19:25:21.026168] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:11.813 [2024-10-17 19:25:21.026179] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:11.813 request: 00:23:11.813 { 00:23:11.813 "name": "TLSTEST", 00:23:11.813 "trtype": "tcp", 00:23:11.813 "traddr": "10.0.0.3", 00:23:11.813 "adrfam": "ipv4", 00:23:11.813 "trsvcid": "4420", 00:23:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.813 "prchk_reftag": false, 00:23:11.813 "prchk_guard": false, 00:23:11.813 "hdgst": false, 00:23:11.813 "ddgst": false, 00:23:11.813 "psk": "key0", 00:23:11.813 "allow_unrecognized_csi": false, 00:23:11.813 "method": "bdev_nvme_attach_controller", 00:23:11.813 "req_id": 1 00:23:11.813 } 00:23:11.813 Got JSON-RPC error response 00:23:11.813 response: 00:23:11.813 { 00:23:11.813 "code": -5, 00:23:11.813 "message": "Input/output error" 00:23:11.813 } 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72005 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72005 ']' 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72005 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.813 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72005 00:23:12.072 killing process with pid 72005 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72005' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72005 00:23:12.072 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.072 00:23:12.072 Latency(us) 00:23:12.072 [2024-10-17T19:25:21.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.072 [2024-10-17T19:25:21.330Z] =================================================================================================================== 00:23:12.072 [2024-10-17T19:25:21.330Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72005 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.072 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72034 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72034 /var/tmp/bdevperf.sock 00:23:12.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72034 ']' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.072 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.331 [2024-10-17 19:25:21.334534] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:12.331 [2024-10-17 19:25:21.334668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72034 ] 00:23:12.331 [2024-10-17 19:25:21.466560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.331 [2024-10-17 19:25:21.518523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.331 [2024-10-17 19:25:21.571254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:12.589 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.589 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:12.589 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:12.847 [2024-10-17 19:25:21.929308] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:12.847 [2024-10-17 19:25:21.929379] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:12.847 request: 00:23:12.847 { 00:23:12.847 "name": "key0", 00:23:12.847 "path": "", 00:23:12.847 "method": "keyring_file_add_key", 00:23:12.847 "req_id": 1 00:23:12.847 } 00:23:12.847 Got JSON-RPC error response 00:23:12.847 response: 00:23:12.847 { 00:23:12.847 "code": -1, 00:23:12.847 "message": "Operation not permitted" 00:23:12.847 } 00:23:12.847 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.105 [2024-10-17 19:25:22.229531] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.105 [2024-10-17 19:25:22.229633] bdev_nvme.c:6498:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:13.105 request: 00:23:13.105 { 00:23:13.105 "name": "TLSTEST", 00:23:13.105 "trtype": "tcp", 00:23:13.105 "traddr": "10.0.0.3", 00:23:13.105 "adrfam": "ipv4", 00:23:13.105 "trsvcid": "4420", 00:23:13.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.105 "prchk_reftag": false, 00:23:13.105 "prchk_guard": false, 00:23:13.105 "hdgst": false, 00:23:13.105 "ddgst": false, 00:23:13.105 "psk": "key0", 00:23:13.105 "allow_unrecognized_csi": false, 00:23:13.105 "method": "bdev_nvme_attach_controller", 00:23:13.105 "req_id": 1 00:23:13.105 } 00:23:13.105 Got JSON-RPC error response 00:23:13.105 response: 00:23:13.105 { 00:23:13.105 "code": -126, 00:23:13.105 "message": "Required key not available" 00:23:13.105 } 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72034 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72034 ']' 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72034 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.105 19:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72034 00:23:13.105 killing process with pid 72034 00:23:13.105 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.105 00:23:13.105 Latency(us) 00:23:13.105 [2024-10-17T19:25:22.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.105 [2024-10-17T19:25:22.363Z] =================================================================================================================== 00:23:13.105 [2024-10-17T19:25:22.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72034' 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72034 00:23:13.105 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72034 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71566 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71566 ']' 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71566 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71566 00:23:13.365 killing process with pid 71566 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71566' 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71566 00:23:13.365 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71566 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ihBASWRRuq 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ihBASWRRuq 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72076 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72076 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72076 ']' 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.623 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.881 [2024-10-17 19:25:22.918058] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:13.881 [2024-10-17 19:25:22.918180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.881 [2024-10-17 19:25:23.053598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.881 [2024-10-17 19:25:23.124247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.881 [2024-10-17 19:25:23.124525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.881 [2024-10-17 19:25:23.124665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.881 [2024-10-17 19:25:23.124719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.881 [2024-10-17 19:25:23.124748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.881 [2024-10-17 19:25:23.125307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.139 [2024-10-17 19:25:23.199621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ihBASWRRuq 00:23:14.139 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.397 [2024-10-17 19:25:23.612024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.397 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.656 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:14.930 [2024-10-17 19:25:24.144192] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.930 [2024-10-17 19:25:24.144474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:14.930 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:15.191 malloc0 00:23:15.191 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.757 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:15.757 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ihBASWRRuq 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
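The key0 registered with the target just above points at /tmp/tmp.ihBASWRRuq, the long-form key produced by format_interchange_psk at target/tls.sh@160-163 (NVMeTLSkey-1:02:MDAx...==:). A small Python sketch of what the embedded python in that helper most plausibly computes, under the assumption that the CRC32 of the configured PSK is appended little-endian before base64 encoding:

import base64
import zlib

def format_interchange_psk(configured_psk: str, hmac_id: int = 2) -> str:
    # Sketch of the interchange encoding seen in the trace: the ASCII PSK bytes
    # plus a CRC32 (endianness assumed little) are base64-encoded and wrapped
    # as NVMeTLSkey-1:<hmac>:<base64>:.
    key = configured_psk.encode("ascii")
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, b64)

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677"))
# Expected to match the key_long value logged above (ending ...wWXNJw==:) if the
# endianness assumption holds.

Either way, the authoritative value is the one the test itself writes to /tmp/tmp.ihBASWRRuq and chmods to 0600 before registering it with keyring_file_add_key.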
00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ihBASWRRuq 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72124 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72124 /var/tmp/bdevperf.sock 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72124 ']' 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.016 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.274 [2024-10-17 19:25:25.315427] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:16.275 [2024-10-17 19:25:25.315573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72124 ] 00:23:16.275 [2024-10-17 19:25:25.456178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.275 [2024-10-17 19:25:25.520477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.532 [2024-10-17 19:25:25.577830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:17.097 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.097 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:17.097 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:17.688 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.688 [2024-10-17 19:25:26.881184] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.945 TLSTESTn1 00:23:17.945 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.945 Running I/O for 10 seconds... 00:23:20.252 3984.00 IOPS, 15.56 MiB/s [2024-10-17T19:25:30.450Z] 4032.00 IOPS, 15.75 MiB/s [2024-10-17T19:25:31.384Z] 4024.67 IOPS, 15.72 MiB/s [2024-10-17T19:25:32.318Z] 4005.25 IOPS, 15.65 MiB/s [2024-10-17T19:25:33.252Z] 3972.80 IOPS, 15.52 MiB/s [2024-10-17T19:25:34.184Z] 3951.33 IOPS, 15.43 MiB/s [2024-10-17T19:25:35.116Z] 3891.29 IOPS, 15.20 MiB/s [2024-10-17T19:25:36.496Z] 3884.12 IOPS, 15.17 MiB/s [2024-10-17T19:25:37.428Z] 3872.11 IOPS, 15.13 MiB/s [2024-10-17T19:25:37.428Z] 3863.80 IOPS, 15.09 MiB/s 00:23:28.170 Latency(us) 00:23:28.170 [2024-10-17T19:25:37.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.170 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:28.170 Verification LBA range: start 0x0 length 0x2000 00:23:28.170 TLSTESTn1 : 10.02 3869.91 15.12 0.00 0.00 33023.02 5868.45 36223.53 00:23:28.170 [2024-10-17T19:25:37.428Z] =================================================================================================================== 00:23:28.170 [2024-10-17T19:25:37.428Z] Total : 3869.91 15.12 0.00 0.00 33023.02 5868.45 36223.53 00:23:28.170 { 00:23:28.170 "results": [ 00:23:28.170 { 00:23:28.170 "job": "TLSTESTn1", 00:23:28.170 "core_mask": "0x4", 00:23:28.170 "workload": "verify", 00:23:28.170 "status": "finished", 00:23:28.170 "verify_range": { 00:23:28.170 "start": 0, 00:23:28.170 "length": 8192 00:23:28.170 }, 00:23:28.170 "queue_depth": 128, 00:23:28.170 "io_size": 4096, 00:23:28.170 "runtime": 10.016768, 00:23:28.170 "iops": 3869.9109333469637, 00:23:28.170 "mibps": 15.116839583386577, 00:23:28.170 "io_failed": 0, 00:23:28.170 "io_timeout": 0, 00:23:28.170 "avg_latency_us": 33023.017795330255, 00:23:28.170 "min_latency_us": 5868.450909090909, 00:23:28.171 
"max_latency_us": 36223.534545454546 00:23:28.171 } 00:23:28.171 ], 00:23:28.171 "core_count": 1 00:23:28.171 } 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72124 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72124 ']' 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72124 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72124 00:23:28.171 killing process with pid 72124 00:23:28.171 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.171 00:23:28.171 Latency(us) 00:23:28.171 [2024-10-17T19:25:37.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.171 [2024-10-17T19:25:37.429Z] =================================================================================================================== 00:23:28.171 [2024-10-17T19:25:37.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72124' 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72124 00:23:28.171 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72124 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ihBASWRRuq 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ihBASWRRuq 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ihBASWRRuq 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ihBASWRRuq 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ihBASWRRuq 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72261 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72261 /var/tmp/bdevperf.sock 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72261 ']' 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.428 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.429 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.429 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.429 [2024-10-17 19:25:37.537118] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:28.429 [2024-10-17 19:25:37.537258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72261 ] 00:23:28.429 [2024-10-17 19:25:37.673665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.686 [2024-10-17 19:25:37.751135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.686 [2024-10-17 19:25:37.832396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:28.686 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.686 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:28.686 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:29.254 [2024-10-17 19:25:38.238489] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ihBASWRRuq': 0100666 00:23:29.254 [2024-10-17 19:25:38.238894] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:29.254 request: 00:23:29.254 { 00:23:29.254 "name": "key0", 00:23:29.254 "path": "/tmp/tmp.ihBASWRRuq", 00:23:29.254 "method": "keyring_file_add_key", 00:23:29.254 "req_id": 1 00:23:29.254 } 00:23:29.254 Got JSON-RPC error response 00:23:29.254 response: 00:23:29.254 { 00:23:29.254 "code": -1, 00:23:29.254 "message": "Operation not permitted" 00:23:29.254 } 00:23:29.254 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.616 [2024-10-17 19:25:38.518717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.616 [2024-10-17 19:25:38.519109] bdev_nvme.c:6498:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:29.616 request: 00:23:29.616 { 00:23:29.616 "name": "TLSTEST", 00:23:29.616 "trtype": "tcp", 00:23:29.616 "traddr": "10.0.0.3", 00:23:29.616 "adrfam": "ipv4", 00:23:29.616 "trsvcid": "4420", 00:23:29.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.616 "prchk_reftag": false, 00:23:29.616 "prchk_guard": false, 00:23:29.616 "hdgst": false, 00:23:29.616 "ddgst": false, 00:23:29.616 "psk": "key0", 00:23:29.616 "allow_unrecognized_csi": false, 00:23:29.616 "method": "bdev_nvme_attach_controller", 00:23:29.616 "req_id": 1 00:23:29.616 } 00:23:29.616 Got JSON-RPC error response 00:23:29.616 response: 00:23:29.616 { 00:23:29.616 "code": -126, 00:23:29.616 "message": "Required key not available" 00:23:29.616 } 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72261 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72261 ']' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72261 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72261 00:23:29.616 killing process with pid 72261 00:23:29.616 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.616 00:23:29.616 Latency(us) 00:23:29.616 [2024-10-17T19:25:38.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.616 [2024-10-17T19:25:38.874Z] =================================================================================================================== 00:23:29.616 [2024-10-17T19:25:38.874Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72261' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72261 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72261 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72076 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72076 ']' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72076 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.616 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72076 00:23:29.886 killing process with pid 72076 00:23:29.886 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.886 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.886 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72076' 00:23:29.886 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72076 00:23:29.886 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72076 00:23:30.143 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:30.143 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:30.143 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.143 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:23:30.143 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72298 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72298 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72298 ']' 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.144 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.144 [2024-10-17 19:25:39.265285] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:30.144 [2024-10-17 19:25:39.265758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.401 [2024-10-17 19:25:39.407607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.401 [2024-10-17 19:25:39.486438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.401 [2024-10-17 19:25:39.486752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.401 [2024-10-17 19:25:39.486772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.401 [2024-10-17 19:25:39.486782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.401 [2024-10-17 19:25:39.486789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
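The keyring rejection at 19:25:38 above ("Invalid permissions for key file '/tmp/tmp.ihBASWRRuq': 0100666") follows from the chmod 0666 applied at target/tls.sh@171: the keyring refuses the now world-readable key file, which is exactly what this negative case (and the target-side one that follows) exercises. A rough, assumed-equivalent Python sketch of that permission check:

import os
import stat

def check_key_file_mode(path: str) -> None:
    # Assumed equivalent of the check behind keyring_file_check_path: the key
    # file must grant no group/other access bits (i.e. a 0600-style mode).
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '{}': {:07o}".format(path, mode))

Restoring the strict mode (chmod 0600 /tmp/tmp.ihBASWRRuq, which the test does again at target/tls.sh@182 once these failure cases are done) is what lets the same keyring_file_add_key call succeed.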
00:23:30.401 [2024-10-17 19:25:39.487260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.401 [2024-10-17 19:25:39.562060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ihBASWRRuq 00:23:31.334 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.592 [2024-10-17 19:25:40.675510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.592 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.849 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:32.106 [2024-10-17 19:25:41.251692] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.106 [2024-10-17 19:25:41.251945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:32.106 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:32.364 malloc0 00:23:32.364 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.930 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:33.188 
[2024-10-17 19:25:42.252649] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ihBASWRRuq': 0100666 00:23:33.188 [2024-10-17 19:25:42.252926] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:33.188 request: 00:23:33.188 { 00:23:33.188 "name": "key0", 00:23:33.188 "path": "/tmp/tmp.ihBASWRRuq", 00:23:33.188 "method": "keyring_file_add_key", 00:23:33.188 "req_id": 1 00:23:33.188 } 00:23:33.188 Got JSON-RPC error response 00:23:33.188 response: 00:23:33.188 { 00:23:33.188 "code": -1, 00:23:33.188 "message": "Operation not permitted" 00:23:33.188 } 00:23:33.188 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.446 [2024-10-17 19:25:42.528829] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:33.446 [2024-10-17 19:25:42.528950] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:33.446 request: 00:23:33.446 { 00:23:33.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.446 "host": "nqn.2016-06.io.spdk:host1", 00:23:33.446 "psk": "key0", 00:23:33.446 "method": "nvmf_subsystem_add_host", 00:23:33.446 "req_id": 1 00:23:33.446 } 00:23:33.446 Got JSON-RPC error response 00:23:33.446 response: 00:23:33.446 { 00:23:33.446 "code": -32603, 00:23:33.446 "message": "Internal error" 00:23:33.446 } 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72298 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72298 ']' 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72298 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72298 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.446 killing process with pid 72298 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72298' 00:23:33.446 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72298 00:23:33.447 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72298 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ihBASWRRuq 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72367 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72367 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72367 ']' 00:23:33.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.704 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.962 [2024-10-17 19:25:42.967734] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:33.962 [2024-10-17 19:25:42.967852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.962 [2024-10-17 19:25:43.108918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.962 [2024-10-17 19:25:43.186090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.962 [2024-10-17 19:25:43.186171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.962 [2024-10-17 19:25:43.186184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.962 [2024-10-17 19:25:43.186192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.962 [2024-10-17 19:25:43.186200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
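The keyring_file_add_key failure above is the expected negative case: the file-based keyring rejects a PSK file that is group- or world-readable (mode 0100666 in the error), so the follow-up nvmf_subsystem_add_host cannot resolve key0 and returns "Internal error". The harness therefore tightens the key file mode and restarts the target before retrying; condensed from the trace:

# PSK files must be private to the owner, otherwise keyring_file_add_key fails with
# "Invalid permissions for key file" and no host can be bound to the key
chmod 0600 /tmp/tmp.ihBASWRRuq
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq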
00:23:33.962 [2024-10-17 19:25:43.186668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.220 [2024-10-17 19:25:43.264065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ihBASWRRuq 00:23:34.220 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.479 [2024-10-17 19:25:43.696014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.479 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:35.045 19:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:35.045 [2024-10-17 19:25:44.244171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.045 [2024-10-17 19:25:44.244657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:35.045 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.609 malloc0 00:23:35.609 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.609 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:35.867 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72421 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72421 /var/tmp/bdevperf.sock 00:23:36.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
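Taken together, the second setup_nvmf_tgt pass configures a TLS-enabled target, and the bdevperf process that has just started is then pointed at it with the same PSK. A condensed sketch of the RPC sequence as it appears in the trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the initiator-side calls follow next in the log):

# target side: TCP transport, subsystem, TLS listener (-k), namespace, key and host/PSK binding
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side: register the same key with bdevperf and attach the controller over TLS
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0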
00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72421 ']' 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.450 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.450 [2024-10-17 19:25:45.438263] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:36.450 [2024-10-17 19:25:45.438749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72421 ] 00:23:36.450 [2024-10-17 19:25:45.584461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.450 [2024-10-17 19:25:45.649251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.718 [2024-10-17 19:25:45.704663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:37.284 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.284 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:37.284 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:37.850 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.850 [2024-10-17 19:25:47.062026] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.107 TLSTESTn1 00:23:38.107 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:38.366 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:38.366 "subsystems": [ 00:23:38.366 { 00:23:38.366 "subsystem": "keyring", 00:23:38.366 "config": [ 00:23:38.366 { 00:23:38.366 "method": "keyring_file_add_key", 00:23:38.366 "params": { 00:23:38.366 "name": "key0", 00:23:38.366 "path": "/tmp/tmp.ihBASWRRuq" 00:23:38.366 } 00:23:38.366 } 00:23:38.366 ] 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "subsystem": "iobuf", 00:23:38.366 "config": [ 00:23:38.366 { 00:23:38.366 "method": "iobuf_set_options", 00:23:38.366 "params": { 00:23:38.366 "small_pool_count": 8192, 00:23:38.366 "large_pool_count": 1024, 00:23:38.366 "small_bufsize": 8192, 00:23:38.366 "large_bufsize": 135168 00:23:38.366 } 00:23:38.366 } 00:23:38.366 ] 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "subsystem": "sock", 00:23:38.366 "config": [ 00:23:38.366 { 00:23:38.366 "method": "sock_set_default_impl", 00:23:38.366 "params": { 00:23:38.366 "impl_name": "uring" 
00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "sock_impl_set_options", 00:23:38.366 "params": { 00:23:38.366 "impl_name": "ssl", 00:23:38.366 "recv_buf_size": 4096, 00:23:38.366 "send_buf_size": 4096, 00:23:38.366 "enable_recv_pipe": true, 00:23:38.366 "enable_quickack": false, 00:23:38.366 "enable_placement_id": 0, 00:23:38.366 "enable_zerocopy_send_server": true, 00:23:38.366 "enable_zerocopy_send_client": false, 00:23:38.366 "zerocopy_threshold": 0, 00:23:38.366 "tls_version": 0, 00:23:38.366 "enable_ktls": false 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "sock_impl_set_options", 00:23:38.366 "params": { 00:23:38.366 "impl_name": "posix", 00:23:38.366 "recv_buf_size": 2097152, 00:23:38.366 "send_buf_size": 2097152, 00:23:38.366 "enable_recv_pipe": true, 00:23:38.366 "enable_quickack": false, 00:23:38.366 "enable_placement_id": 0, 00:23:38.366 "enable_zerocopy_send_server": true, 00:23:38.366 "enable_zerocopy_send_client": false, 00:23:38.366 "zerocopy_threshold": 0, 00:23:38.366 "tls_version": 0, 00:23:38.366 "enable_ktls": false 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "sock_impl_set_options", 00:23:38.366 "params": { 00:23:38.366 "impl_name": "uring", 00:23:38.366 "recv_buf_size": 2097152, 00:23:38.366 "send_buf_size": 2097152, 00:23:38.366 "enable_recv_pipe": true, 00:23:38.366 "enable_quickack": false, 00:23:38.366 "enable_placement_id": 0, 00:23:38.366 "enable_zerocopy_send_server": false, 00:23:38.366 "enable_zerocopy_send_client": false, 00:23:38.366 "zerocopy_threshold": 0, 00:23:38.366 "tls_version": 0, 00:23:38.366 "enable_ktls": false 00:23:38.366 } 00:23:38.366 } 00:23:38.366 ] 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "subsystem": "vmd", 00:23:38.366 "config": [] 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "subsystem": "accel", 00:23:38.366 "config": [ 00:23:38.366 { 00:23:38.366 "method": "accel_set_options", 00:23:38.366 "params": { 00:23:38.366 "small_cache_size": 128, 00:23:38.366 "large_cache_size": 16, 00:23:38.366 "task_count": 2048, 00:23:38.366 "sequence_count": 2048, 00:23:38.366 "buf_count": 2048 00:23:38.366 } 00:23:38.366 } 00:23:38.366 ] 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "subsystem": "bdev", 00:23:38.366 "config": [ 00:23:38.366 { 00:23:38.366 "method": "bdev_set_options", 00:23:38.366 "params": { 00:23:38.366 "bdev_io_pool_size": 65535, 00:23:38.366 "bdev_io_cache_size": 256, 00:23:38.366 "bdev_auto_examine": true, 00:23:38.366 "iobuf_small_cache_size": 128, 00:23:38.366 "iobuf_large_cache_size": 16 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "bdev_raid_set_options", 00:23:38.366 "params": { 00:23:38.366 "process_window_size_kb": 1024, 00:23:38.366 "process_max_bandwidth_mb_sec": 0 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "bdev_iscsi_set_options", 00:23:38.366 "params": { 00:23:38.366 "timeout_sec": 30 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "bdev_nvme_set_options", 00:23:38.366 "params": { 00:23:38.366 "action_on_timeout": "none", 00:23:38.366 "timeout_us": 0, 00:23:38.366 "timeout_admin_us": 0, 00:23:38.366 "keep_alive_timeout_ms": 10000, 00:23:38.366 "arbitration_burst": 0, 00:23:38.366 "low_priority_weight": 0, 00:23:38.366 "medium_priority_weight": 0, 00:23:38.366 "high_priority_weight": 0, 00:23:38.366 "nvme_adminq_poll_period_us": 10000, 00:23:38.366 "nvme_ioq_poll_period_us": 0, 00:23:38.366 "io_queue_requests": 0, 00:23:38.366 "delay_cmd_submit": true, 00:23:38.366 
"transport_retry_count": 4, 00:23:38.366 "bdev_retry_count": 3, 00:23:38.366 "transport_ack_timeout": 0, 00:23:38.366 "ctrlr_loss_timeout_sec": 0, 00:23:38.366 "reconnect_delay_sec": 0, 00:23:38.366 "fast_io_fail_timeout_sec": 0, 00:23:38.366 "disable_auto_failback": false, 00:23:38.366 "generate_uuids": false, 00:23:38.366 "transport_tos": 0, 00:23:38.366 "nvme_error_stat": false, 00:23:38.366 "rdma_srq_size": 0, 00:23:38.366 "io_path_stat": false, 00:23:38.366 "allow_accel_sequence": false, 00:23:38.366 "rdma_max_cq_size": 0, 00:23:38.366 "rdma_cm_event_timeout_ms": 0, 00:23:38.366 "dhchap_digests": [ 00:23:38.366 "sha256", 00:23:38.366 "sha384", 00:23:38.366 "sha512" 00:23:38.366 ], 00:23:38.366 "dhchap_dhgroups": [ 00:23:38.366 "null", 00:23:38.366 "ffdhe2048", 00:23:38.366 "ffdhe3072", 00:23:38.366 "ffdhe4096", 00:23:38.366 "ffdhe6144", 00:23:38.366 "ffdhe8192" 00:23:38.366 ] 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "bdev_nvme_set_hotplug", 00:23:38.366 "params": { 00:23:38.366 "period_us": 100000, 00:23:38.366 "enable": false 00:23:38.366 } 00:23:38.366 }, 00:23:38.366 { 00:23:38.366 "method": "bdev_malloc_create", 00:23:38.366 "params": { 00:23:38.366 "name": "malloc0", 00:23:38.366 "num_blocks": 8192, 00:23:38.366 "block_size": 4096, 00:23:38.366 "physical_block_size": 4096, 00:23:38.367 "uuid": "97b13a74-7b68-4ef0-b591-c59a0f68a228", 00:23:38.367 "optimal_io_boundary": 0, 00:23:38.367 "md_size": 0, 00:23:38.367 "dif_type": 0, 00:23:38.367 "dif_is_head_of_md": false, 00:23:38.367 "dif_pi_format": 0 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "bdev_wait_for_examine" 00:23:38.367 } 00:23:38.367 ] 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "subsystem": "nbd", 00:23:38.367 "config": [] 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "subsystem": "scheduler", 00:23:38.367 "config": [ 00:23:38.367 { 00:23:38.367 "method": "framework_set_scheduler", 00:23:38.367 "params": { 00:23:38.367 "name": "static" 00:23:38.367 } 00:23:38.367 } 00:23:38.367 ] 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "subsystem": "nvmf", 00:23:38.367 "config": [ 00:23:38.367 { 00:23:38.367 "method": "nvmf_set_config", 00:23:38.367 "params": { 00:23:38.367 "discovery_filter": "match_any", 00:23:38.367 "admin_cmd_passthru": { 00:23:38.367 "identify_ctrlr": false 00:23:38.367 }, 00:23:38.367 "dhchap_digests": [ 00:23:38.367 "sha256", 00:23:38.367 "sha384", 00:23:38.367 "sha512" 00:23:38.367 ], 00:23:38.367 "dhchap_dhgroups": [ 00:23:38.367 "null", 00:23:38.367 "ffdhe2048", 00:23:38.367 "ffdhe3072", 00:23:38.367 "ffdhe4096", 00:23:38.367 "ffdhe6144", 00:23:38.367 "ffdhe8192" 00:23:38.367 ] 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_set_max_subsystems", 00:23:38.367 "params": { 00:23:38.367 "max_subsystems": 1024 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_set_crdt", 00:23:38.367 "params": { 00:23:38.367 "crdt1": 0, 00:23:38.367 "crdt2": 0, 00:23:38.367 "crdt3": 0 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_create_transport", 00:23:38.367 "params": { 00:23:38.367 "trtype": "TCP", 00:23:38.367 "max_queue_depth": 128, 00:23:38.367 "max_io_qpairs_per_ctrlr": 127, 00:23:38.367 "in_capsule_data_size": 4096, 00:23:38.367 "max_io_size": 131072, 00:23:38.367 "io_unit_size": 131072, 00:23:38.367 "max_aq_depth": 128, 00:23:38.367 "num_shared_buffers": 511, 00:23:38.367 "buf_cache_size": 4294967295, 00:23:38.367 "dif_insert_or_strip": false, 00:23:38.367 "zcopy": false, 00:23:38.367 
"c2h_success": false, 00:23:38.367 "sock_priority": 0, 00:23:38.367 "abort_timeout_sec": 1, 00:23:38.367 "ack_timeout": 0, 00:23:38.367 "data_wr_pool_size": 0 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_create_subsystem", 00:23:38.367 "params": { 00:23:38.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.367 "allow_any_host": false, 00:23:38.367 "serial_number": "SPDK00000000000001", 00:23:38.367 "model_number": "SPDK bdev Controller", 00:23:38.367 "max_namespaces": 10, 00:23:38.367 "min_cntlid": 1, 00:23:38.367 "max_cntlid": 65519, 00:23:38.367 "ana_reporting": false 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_subsystem_add_host", 00:23:38.367 "params": { 00:23:38.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.367 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.367 "psk": "key0" 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_subsystem_add_ns", 00:23:38.367 "params": { 00:23:38.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.367 "namespace": { 00:23:38.367 "nsid": 1, 00:23:38.367 "bdev_name": "malloc0", 00:23:38.367 "nguid": "97B13A747B684EF0B591C59A0F68A228", 00:23:38.367 "uuid": "97b13a74-7b68-4ef0-b591-c59a0f68a228", 00:23:38.367 "no_auto_visible": false 00:23:38.367 } 00:23:38.367 } 00:23:38.367 }, 00:23:38.367 { 00:23:38.367 "method": "nvmf_subsystem_add_listener", 00:23:38.367 "params": { 00:23:38.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.367 "listen_address": { 00:23:38.367 "trtype": "TCP", 00:23:38.367 "adrfam": "IPv4", 00:23:38.367 "traddr": "10.0.0.3", 00:23:38.367 "trsvcid": "4420" 00:23:38.367 }, 00:23:38.367 "secure_channel": true 00:23:38.367 } 00:23:38.367 } 00:23:38.367 ] 00:23:38.367 } 00:23:38.367 ] 00:23:38.367 }' 00:23:38.367 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:38.626 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:38.626 "subsystems": [ 00:23:38.626 { 00:23:38.626 "subsystem": "keyring", 00:23:38.626 "config": [ 00:23:38.626 { 00:23:38.626 "method": "keyring_file_add_key", 00:23:38.626 "params": { 00:23:38.626 "name": "key0", 00:23:38.626 "path": "/tmp/tmp.ihBASWRRuq" 00:23:38.626 } 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "iobuf", 00:23:38.626 "config": [ 00:23:38.626 { 00:23:38.626 "method": "iobuf_set_options", 00:23:38.626 "params": { 00:23:38.626 "small_pool_count": 8192, 00:23:38.626 "large_pool_count": 1024, 00:23:38.626 "small_bufsize": 8192, 00:23:38.626 "large_bufsize": 135168 00:23:38.626 } 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "sock", 00:23:38.626 "config": [ 00:23:38.626 { 00:23:38.626 "method": "sock_set_default_impl", 00:23:38.626 "params": { 00:23:38.626 "impl_name": "uring" 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "sock_impl_set_options", 00:23:38.626 "params": { 00:23:38.626 "impl_name": "ssl", 00:23:38.626 "recv_buf_size": 4096, 00:23:38.626 "send_buf_size": 4096, 00:23:38.626 "enable_recv_pipe": true, 00:23:38.626 "enable_quickack": false, 00:23:38.626 "enable_placement_id": 0, 00:23:38.626 "enable_zerocopy_send_server": true, 00:23:38.626 "enable_zerocopy_send_client": false, 00:23:38.626 "zerocopy_threshold": 0, 00:23:38.626 "tls_version": 0, 00:23:38.626 "enable_ktls": false 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": 
"sock_impl_set_options", 00:23:38.626 "params": { 00:23:38.626 "impl_name": "posix", 00:23:38.626 "recv_buf_size": 2097152, 00:23:38.626 "send_buf_size": 2097152, 00:23:38.626 "enable_recv_pipe": true, 00:23:38.626 "enable_quickack": false, 00:23:38.626 "enable_placement_id": 0, 00:23:38.626 "enable_zerocopy_send_server": true, 00:23:38.626 "enable_zerocopy_send_client": false, 00:23:38.626 "zerocopy_threshold": 0, 00:23:38.626 "tls_version": 0, 00:23:38.626 "enable_ktls": false 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "sock_impl_set_options", 00:23:38.626 "params": { 00:23:38.626 "impl_name": "uring", 00:23:38.626 "recv_buf_size": 2097152, 00:23:38.626 "send_buf_size": 2097152, 00:23:38.626 "enable_recv_pipe": true, 00:23:38.626 "enable_quickack": false, 00:23:38.626 "enable_placement_id": 0, 00:23:38.626 "enable_zerocopy_send_server": false, 00:23:38.626 "enable_zerocopy_send_client": false, 00:23:38.626 "zerocopy_threshold": 0, 00:23:38.626 "tls_version": 0, 00:23:38.626 "enable_ktls": false 00:23:38.626 } 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "vmd", 00:23:38.626 "config": [] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "accel", 00:23:38.626 "config": [ 00:23:38.626 { 00:23:38.626 "method": "accel_set_options", 00:23:38.626 "params": { 00:23:38.626 "small_cache_size": 128, 00:23:38.626 "large_cache_size": 16, 00:23:38.626 "task_count": 2048, 00:23:38.626 "sequence_count": 2048, 00:23:38.626 "buf_count": 2048 00:23:38.626 } 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "bdev", 00:23:38.626 "config": [ 00:23:38.626 { 00:23:38.626 "method": "bdev_set_options", 00:23:38.626 "params": { 00:23:38.626 "bdev_io_pool_size": 65535, 00:23:38.626 "bdev_io_cache_size": 256, 00:23:38.626 "bdev_auto_examine": true, 00:23:38.626 "iobuf_small_cache_size": 128, 00:23:38.626 "iobuf_large_cache_size": 16 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_raid_set_options", 00:23:38.626 "params": { 00:23:38.626 "process_window_size_kb": 1024, 00:23:38.626 "process_max_bandwidth_mb_sec": 0 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_iscsi_set_options", 00:23:38.626 "params": { 00:23:38.626 "timeout_sec": 30 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_nvme_set_options", 00:23:38.626 "params": { 00:23:38.626 "action_on_timeout": "none", 00:23:38.626 "timeout_us": 0, 00:23:38.626 "timeout_admin_us": 0, 00:23:38.626 "keep_alive_timeout_ms": 10000, 00:23:38.626 "arbitration_burst": 0, 00:23:38.626 "low_priority_weight": 0, 00:23:38.626 "medium_priority_weight": 0, 00:23:38.626 "high_priority_weight": 0, 00:23:38.626 "nvme_adminq_poll_period_us": 10000, 00:23:38.626 "nvme_ioq_poll_period_us": 0, 00:23:38.626 "io_queue_requests": 512, 00:23:38.626 "delay_cmd_submit": true, 00:23:38.626 "transport_retry_count": 4, 00:23:38.626 "bdev_retry_count": 3, 00:23:38.626 "transport_ack_timeout": 0, 00:23:38.626 "ctrlr_loss_timeout_sec": 0, 00:23:38.626 "reconnect_delay_sec": 0, 00:23:38.626 "fast_io_fail_timeout_sec": 0, 00:23:38.626 "disable_auto_failback": false, 00:23:38.626 "generate_uuids": false, 00:23:38.626 "transport_tos": 0, 00:23:38.626 "nvme_error_stat": false, 00:23:38.626 "rdma_srq_size": 0, 00:23:38.626 "io_path_stat": false, 00:23:38.626 "allow_accel_sequence": false, 00:23:38.626 "rdma_max_cq_size": 0, 00:23:38.626 "rdma_cm_event_timeout_ms": 0, 00:23:38.626 "dhchap_digests": [ 00:23:38.626 
"sha256", 00:23:38.626 "sha384", 00:23:38.626 "sha512" 00:23:38.626 ], 00:23:38.626 "dhchap_dhgroups": [ 00:23:38.626 "null", 00:23:38.626 "ffdhe2048", 00:23:38.626 "ffdhe3072", 00:23:38.626 "ffdhe4096", 00:23:38.626 "ffdhe6144", 00:23:38.626 "ffdhe8192" 00:23:38.626 ] 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_nvme_attach_controller", 00:23:38.626 "params": { 00:23:38.626 "name": "TLSTEST", 00:23:38.626 "trtype": "TCP", 00:23:38.626 "adrfam": "IPv4", 00:23:38.626 "traddr": "10.0.0.3", 00:23:38.626 "trsvcid": "4420", 00:23:38.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.626 "prchk_reftag": false, 00:23:38.626 "prchk_guard": false, 00:23:38.626 "ctrlr_loss_timeout_sec": 0, 00:23:38.626 "reconnect_delay_sec": 0, 00:23:38.626 "fast_io_fail_timeout_sec": 0, 00:23:38.626 "psk": "key0", 00:23:38.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.626 "hdgst": false, 00:23:38.626 "ddgst": false, 00:23:38.626 "multipath": "multipath" 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_nvme_set_hotplug", 00:23:38.626 "params": { 00:23:38.626 "period_us": 100000, 00:23:38.626 "enable": false 00:23:38.626 } 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "method": "bdev_wait_for_examine" 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }, 00:23:38.626 { 00:23:38.626 "subsystem": "nbd", 00:23:38.626 "config": [] 00:23:38.626 } 00:23:38.626 ] 00:23:38.626 }' 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72421 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72421 ']' 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72421 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.627 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72421 00:23:38.885 killing process with pid 72421 00:23:38.885 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.885 00:23:38.885 Latency(us) 00:23:38.885 [2024-10-17T19:25:48.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.885 [2024-10-17T19:25:48.143Z] =================================================================================================================== 00:23:38.885 [2024-10-17T19:25:48.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:38.885 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:38.885 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:38.885 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72421' 00:23:38.885 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72421 00:23:38.885 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72421 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72367 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72367 ']' 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72367 00:23:38.885 
19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72367 00:23:38.885 killing process with pid 72367 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72367' 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72367 00:23:38.885 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72367 00:23:39.452 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:39.452 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:39.452 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.452 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.452 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:39.452 "subsystems": [ 00:23:39.452 { 00:23:39.452 "subsystem": "keyring", 00:23:39.452 "config": [ 00:23:39.452 { 00:23:39.452 "method": "keyring_file_add_key", 00:23:39.452 "params": { 00:23:39.452 "name": "key0", 00:23:39.452 "path": "/tmp/tmp.ihBASWRRuq" 00:23:39.452 } 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "subsystem": "iobuf", 00:23:39.452 "config": [ 00:23:39.452 { 00:23:39.452 "method": "iobuf_set_options", 00:23:39.452 "params": { 00:23:39.452 "small_pool_count": 8192, 00:23:39.452 "large_pool_count": 1024, 00:23:39.452 "small_bufsize": 8192, 00:23:39.452 "large_bufsize": 135168 00:23:39.452 } 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "subsystem": "sock", 00:23:39.452 "config": [ 00:23:39.452 { 00:23:39.452 "method": "sock_set_default_impl", 00:23:39.452 "params": { 00:23:39.452 "impl_name": "uring" 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "sock_impl_set_options", 00:23:39.452 "params": { 00:23:39.452 "impl_name": "ssl", 00:23:39.452 "recv_buf_size": 4096, 00:23:39.452 "send_buf_size": 4096, 00:23:39.452 "enable_recv_pipe": true, 00:23:39.452 "enable_quickack": false, 00:23:39.452 "enable_placement_id": 0, 00:23:39.452 "enable_zerocopy_send_server": true, 00:23:39.452 "enable_zerocopy_send_client": false, 00:23:39.452 "zerocopy_threshold": 0, 00:23:39.452 "tls_version": 0, 00:23:39.452 "enable_ktls": false 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "sock_impl_set_options", 00:23:39.452 "params": { 00:23:39.452 "impl_name": "posix", 00:23:39.452 "recv_buf_size": 2097152, 00:23:39.452 "send_buf_size": 2097152, 00:23:39.452 "enable_recv_pipe": true, 00:23:39.452 "enable_quickack": false, 00:23:39.452 "enable_placement_id": 0, 00:23:39.452 "enable_zerocopy_send_server": true, 00:23:39.452 "enable_zerocopy_send_client": false, 00:23:39.452 "zerocopy_threshold": 0, 00:23:39.452 "tls_version": 0, 00:23:39.452 "enable_ktls": false 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 
"method": "sock_impl_set_options", 00:23:39.452 "params": { 00:23:39.452 "impl_name": "uring", 00:23:39.452 "recv_buf_size": 2097152, 00:23:39.452 "send_buf_size": 2097152, 00:23:39.452 "enable_recv_pipe": true, 00:23:39.452 "enable_quickack": false, 00:23:39.452 "enable_placement_id": 0, 00:23:39.452 "enable_zerocopy_send_server": false, 00:23:39.452 "enable_zerocopy_send_client": false, 00:23:39.452 "zerocopy_threshold": 0, 00:23:39.452 "tls_version": 0, 00:23:39.452 "enable_ktls": false 00:23:39.452 } 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "subsystem": "vmd", 00:23:39.452 "config": [] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "subsystem": "accel", 00:23:39.452 "config": [ 00:23:39.452 { 00:23:39.452 "method": "accel_set_options", 00:23:39.452 "params": { 00:23:39.452 "small_cache_size": 128, 00:23:39.452 "large_cache_size": 16, 00:23:39.452 "task_count": 2048, 00:23:39.452 "sequence_count": 2048, 00:23:39.452 "buf_count": 2048 00:23:39.452 } 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "subsystem": "bdev", 00:23:39.452 "config": [ 00:23:39.452 { 00:23:39.452 "method": "bdev_set_options", 00:23:39.452 "params": { 00:23:39.452 "bdev_io_pool_size": 65535, 00:23:39.452 "bdev_io_cache_size": 256, 00:23:39.452 "bdev_auto_examine": true, 00:23:39.452 "iobuf_small_cache_size": 128, 00:23:39.452 "iobuf_large_cache_size": 16 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_raid_set_options", 00:23:39.452 "params": { 00:23:39.452 "process_window_size_kb": 1024, 00:23:39.452 "process_max_bandwidth_mb_sec": 0 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_iscsi_set_options", 00:23:39.452 "params": { 00:23:39.452 "timeout_sec": 30 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_nvme_set_options", 00:23:39.452 "params": { 00:23:39.452 "action_on_timeout": "none", 00:23:39.452 "timeout_us": 0, 00:23:39.452 "timeout_admin_us": 0, 00:23:39.452 "keep_alive_timeout_ms": 10000, 00:23:39.452 "arbitration_burst": 0, 00:23:39.452 "low_priority_weight": 0, 00:23:39.452 "medium_priority_weight": 0, 00:23:39.452 "high_priority_weight": 0, 00:23:39.452 "nvme_adminq_poll_period_us": 10000, 00:23:39.452 "nvme_ioq_poll_period_us": 0, 00:23:39.452 "io_queue_requests": 0, 00:23:39.452 "delay_cmd_submit": true, 00:23:39.452 "transport_retry_count": 4, 00:23:39.452 "bdev_retry_count": 3, 00:23:39.452 "transport_ack_timeout": 0, 00:23:39.452 "ctrlr_loss_timeout_sec": 0, 00:23:39.452 "reconnect_delay_sec": 0, 00:23:39.452 "fast_io_fail_timeout_sec": 0, 00:23:39.452 "disable_auto_failback": false, 00:23:39.452 "generate_uuids": false, 00:23:39.452 "transport_tos": 0, 00:23:39.452 "nvme_error_stat": false, 00:23:39.452 "rdma_srq_size": 0, 00:23:39.452 "io_path_stat": false, 00:23:39.452 "allow_accel_sequence": false, 00:23:39.452 "rdma_max_cq_size": 0, 00:23:39.452 "rdma_cm_event_timeout_ms": 0, 00:23:39.452 "dhchap_digests": [ 00:23:39.452 "sha256", 00:23:39.452 "sha384", 00:23:39.452 "sha512" 00:23:39.452 ], 00:23:39.452 "dhchap_dhgroups": [ 00:23:39.452 "null", 00:23:39.452 "ffdhe2048", 00:23:39.452 "ffdhe3072", 00:23:39.452 "ffdhe4096", 00:23:39.452 "ffdhe6144", 00:23:39.452 "ffdhe8192" 00:23:39.452 ] 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_nvme_set_hotplug", 00:23:39.452 "params": { 00:23:39.452 "period_us": 100000, 00:23:39.452 "enable": false 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_malloc_create", 
00:23:39.452 "params": { 00:23:39.452 "name": "malloc0", 00:23:39.452 "num_blocks": 8192, 00:23:39.452 "block_size": 4096, 00:23:39.452 "physical_block_size": 4096, 00:23:39.452 "uuid": "97b13a74-7b68-4ef0-b591-c59a0f68a228", 00:23:39.452 "optimal_io_boundary": 0, 00:23:39.452 "md_size": 0, 00:23:39.452 "dif_type": 0, 00:23:39.452 "dif_is_head_of_md": false, 00:23:39.452 "dif_pi_format": 0 00:23:39.452 } 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "method": "bdev_wait_for_examine" 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 }, 00:23:39.452 { 00:23:39.453 "subsystem": "nbd", 00:23:39.453 "config": [] 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "subsystem": "scheduler", 00:23:39.453 "config": [ 00:23:39.453 { 00:23:39.453 "method": "framework_set_scheduler", 00:23:39.453 "params": { 00:23:39.453 "name": "static" 00:23:39.453 } 00:23:39.453 } 00:23:39.453 ] 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "subsystem": "nvmf", 00:23:39.453 "config": [ 00:23:39.453 { 00:23:39.453 "method": "nvmf_set_config", 00:23:39.453 "params": { 00:23:39.453 "discovery_filter": "match_any", 00:23:39.453 "admin_cmd_passthru": { 00:23:39.453 "identify_ctrlr": false 00:23:39.453 }, 00:23:39.453 "dhchap_digests": [ 00:23:39.453 "sha256", 00:23:39.453 "sha384", 00:23:39.453 "sha512" 00:23:39.453 ], 00:23:39.453 "dhchap_dhgroups": [ 00:23:39.453 "null", 00:23:39.453 "ffdhe2048", 00:23:39.453 "ffdhe3072", 00:23:39.453 "ffdhe4096", 00:23:39.453 "ffdhe6144", 00:23:39.453 "ffdhe8192" 00:23:39.453 ] 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_set_max_subsystems", 00:23:39.453 "params": { 00:23:39.453 "max_subsystems": 1024 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_set_crdt", 00:23:39.453 "params": { 00:23:39.453 "crdt1": 0, 00:23:39.453 "crdt2": 0, 00:23:39.453 "crdt3": 0 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_create_transport", 00:23:39.453 "params": { 00:23:39.453 "trtype": "TCP", 00:23:39.453 "max_queue_depth": 128, 00:23:39.453 "max_io_qpairs_per_ctrlr": 127, 00:23:39.453 "in_capsule_data_size": 4096, 00:23:39.453 "max_io_size": 131072, 00:23:39.453 "io_unit_size": 131072, 00:23:39.453 "max_aq_depth": 128, 00:23:39.453 "num_shared_buffers": 511, 00:23:39.453 "buf_cache_size": 4294967295, 00:23:39.453 "dif_insert_or_strip": false, 00:23:39.453 "zcopy": false, 00:23:39.453 "c2h_success": false, 00:23:39.453 "sock_priority": 0, 00:23:39.453 "abort_timeout_sec": 1, 00:23:39.453 "ack_timeout": 0, 00:23:39.453 "data_wr_pool_size": 0 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_create_subsystem", 00:23:39.453 "params": { 00:23:39.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.453 "allow_any_host": false, 00:23:39.453 "serial_number": "SPDK00000000000001", 00:23:39.453 "model_number": "SPDK bdev Controller", 00:23:39.453 "max_namespaces": 10, 00:23:39.453 "min_cntlid": 1, 00:23:39.453 "max_cntlid": 65519, 00:23:39.453 "ana_reporting": false 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_subsystem_add_host", 00:23:39.453 "params": { 00:23:39.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.453 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.453 "psk": "key0" 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_subsystem_add_ns", 00:23:39.453 "params": { 00:23:39.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.453 "namespace": { 00:23:39.453 "nsid": 1, 00:23:39.453 "bdev_name": "malloc0", 00:23:39.453 "nguid": "97B13A747B684EF0B591C59A0F68A228", 
00:23:39.453 "uuid": "97b13a74-7b68-4ef0-b591-c59a0f68a228", 00:23:39.453 "no_auto_visible": false 00:23:39.453 } 00:23:39.453 } 00:23:39.453 }, 00:23:39.453 { 00:23:39.453 "method": "nvmf_subsystem_add_listener", 00:23:39.453 "params": { 00:23:39.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.453 "listen_address": { 00:23:39.453 "trtype": "TCP", 00:23:39.453 "adrfam": "IPv4", 00:23:39.453 "traddr": "10.0.0.3", 00:23:39.453 "trsvcid": "4420" 00:23:39.453 }, 00:23:39.453 "secure_channel": true 00:23:39.453 } 00:23:39.453 } 00:23:39.453 ] 00:23:39.453 } 00:23:39.453 ] 00:23:39.453 }' 00:23:39.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72476 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72476 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72476 ']' 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.453 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.453 [2024-10-17 19:25:48.481532] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:39.453 [2024-10-17 19:25:48.481872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.453 [2024-10-17 19:25:48.621024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.453 [2024-10-17 19:25:48.696348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.453 [2024-10-17 19:25:48.696417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.453 [2024-10-17 19:25:48.696430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.453 [2024-10-17 19:25:48.696439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.453 [2024-10-17 19:25:48.696446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.453 [2024-10-17 19:25:48.696957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.712 [2024-10-17 19:25:48.886393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:39.970 [2024-10-17 19:25:48.984370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.970 [2024-10-17 19:25:49.016265] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.970 [2024-10-17 19:25:49.016486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72508 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72508 /var/tmp/bdevperf.sock 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72508 ']' 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:40.538 "subsystems": [ 00:23:40.538 { 00:23:40.538 "subsystem": "keyring", 00:23:40.538 "config": [ 00:23:40.538 { 00:23:40.538 "method": "keyring_file_add_key", 00:23:40.538 "params": { 00:23:40.538 "name": "key0", 00:23:40.538 "path": "/tmp/tmp.ihBASWRRuq" 00:23:40.538 } 00:23:40.538 } 00:23:40.538 ] 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "subsystem": "iobuf", 00:23:40.538 "config": [ 00:23:40.538 { 00:23:40.538 "method": "iobuf_set_options", 00:23:40.538 "params": { 00:23:40.538 "small_pool_count": 8192, 00:23:40.538 "large_pool_count": 1024, 00:23:40.538 "small_bufsize": 8192, 00:23:40.538 "large_bufsize": 135168 00:23:40.538 } 00:23:40.538 } 00:23:40.538 ] 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "subsystem": "sock", 00:23:40.538 "config": [ 00:23:40.538 { 00:23:40.538 "method": "sock_set_default_impl", 00:23:40.538 "params": { 00:23:40.538 "impl_name": "uring" 00:23:40.538 } 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "method": "sock_impl_set_options", 00:23:40.538 "params": { 00:23:40.538 "impl_name": "ssl", 00:23:40.538 "recv_buf_size": 4096, 00:23:40.538 "send_buf_size": 4096, 00:23:40.538 "enable_recv_pipe": true, 00:23:40.538 "enable_quickack": false, 00:23:40.538 "enable_placement_id": 0, 00:23:40.538 "enable_zerocopy_send_server": true, 00:23:40.538 "enable_zerocopy_send_client": false, 00:23:40.538 "zerocopy_threshold": 0, 00:23:40.538 "tls_version": 0, 00:23:40.538 "enable_ktls": false 00:23:40.538 } 00:23:40.538 
}, 00:23:40.538 { 00:23:40.538 "method": "sock_impl_set_options", 00:23:40.538 "params": { 00:23:40.538 "impl_name": "posix", 00:23:40.538 "recv_buf_size": 2097152, 00:23:40.538 "send_buf_size": 2097152, 00:23:40.538 "enable_recv_pipe": true, 00:23:40.538 "enable_quickack": false, 00:23:40.538 "enable_placement_id": 0, 00:23:40.538 "enable_zerocopy_send_server": true, 00:23:40.538 "enable_zerocopy_send_client": false, 00:23:40.538 "zerocopy_threshold": 0, 00:23:40.538 "tls_version": 0, 00:23:40.538 "enable_ktls": false 00:23:40.538 } 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "method": "sock_impl_set_options", 00:23:40.538 "params": { 00:23:40.538 "impl_name": "uring", 00:23:40.538 "recv_buf_size": 2097152, 00:23:40.538 "send_buf_size": 2097152, 00:23:40.538 "enable_recv_pipe": true, 00:23:40.538 "enable_quickack": false, 00:23:40.538 "enable_placement_id": 0, 00:23:40.538 "enable_zerocopy_send_server": false, 00:23:40.538 "enable_zerocopy_send_client": false, 00:23:40.538 "zerocopy_threshold": 0, 00:23:40.538 "tls_version": 0, 00:23:40.538 "enable_ktls": false 00:23:40.538 } 00:23:40.538 } 00:23:40.538 ] 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "subsystem": "vmd", 00:23:40.538 "config": [] 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "subsystem": "accel", 00:23:40.538 "config": [ 00:23:40.538 { 00:23:40.538 "method": "accel_set_options", 00:23:40.538 "params": { 00:23:40.538 "small_cache_size": 128, 00:23:40.538 "large_cache_size": 16, 00:23:40.538 "task_count": 2048, 00:23:40.538 "sequence_count": 2048, 00:23:40.538 "buf_count": 2048 00:23:40.538 } 00:23:40.538 } 00:23:40.538 ] 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "subsystem": "bdev", 00:23:40.538 "config": [ 00:23:40.538 { 00:23:40.538 "method": "bdev_set_options", 00:23:40.538 "params": { 00:23:40.538 "bdev_io_pool_size": 65535, 00:23:40.538 "bdev_io_cache_size": 256, 00:23:40.538 "bdev_auto_examine": true, 00:23:40.538 "iobuf_small_cache_size": 128, 00:23:40.538 "iobuf_large_cache_size": 16 00:23:40.538 } 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "method": "bdev_raid_set_options", 00:23:40.538 "params": { 00:23:40.538 "process_window_size_kb": 1024, 00:23:40.538 "process_max_bandwidth_mb_sec": 0 00:23:40.538 } 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "method": "bdev_iscsi_set_options", 00:23:40.538 "params": { 00:23:40.538 "timeout_sec": 30 00:23:40.538 } 00:23:40.538 }, 00:23:40.538 { 00:23:40.538 "method": "bdev_nvme_set_options", 00:23:40.538 "params": { 00:23:40.538 "action_on_timeout": "none", 00:23:40.538 "timeout_us": 0, 00:23:40.538 "timeout_admin_us": 0, 00:23:40.538 "keep_alive_timeout_ms": 10000, 00:23:40.538 "arbitration_burst": 0, 00:23:40.538 "low_priority_weight": 0, 00:23:40.538 "medium_priority_weight": 0, 00:23:40.538 "high_priority_weight": 0, 00:23:40.538 "nvme_adminq_poll_period_us": 10000, 00:23:40.538 "nvme_ioq_poll_period_us": 0, 00:23:40.538 "io_queue_requests": 512, 00:23:40.538 "delay_cmd_submit": true, 00:23:40.538 "transport_retry_count": 4, 00:23:40.538 "bdev_retry_count": 3, 00:23:40.538 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.539 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:40.539 "transport_ack_timeout": 0, 00:23:40.539 "ctrlr_loss_timeout_sec": 0, 00:23:40.539 "reconnect_delay_sec": 0, 00:23:40.539 "fast_io_fail_timeout_sec": 0, 00:23:40.539 "disable_auto_failback": false, 00:23:40.539 "generate_uuids": false, 00:23:40.539 "transport_tos": 0, 00:23:40.539 "nvme_error_stat": false, 00:23:40.539 "rdma_srq_size": 0, 00:23:40.539 "io_path_stat": false, 00:23:40.539 "allow_accel_sequence": false, 00:23:40.539 "rdma_max_cq_size": 0, 00:23:40.539 "rdma_cm_event_timeout_ms": 0, 00:23:40.539 "dhchap_digests": [ 00:23:40.539 "sha256", 00:23:40.539 "sha384", 00:23:40.539 "sha512" 00:23:40.539 ], 00:23:40.539 "dhchap_dhgroups": [ 00:23:40.539 "null", 00:23:40.539 "ffdhe2048", 00:23:40.539 "ffdhe3072", 00:23:40.539 "ffdhe4096", 00:23:40.539 "ffdhe6144", 00:23:40.539 "ffdhe8192" 00:23:40.539 ] 00:23:40.539 } 00:23:40.539 }, 00:23:40.539 { 00:23:40.539 "method": "bdev_nvme_attach_controller", 00:23:40.539 "params": { 00:23:40.539 "name": "TLSTEST", 00:23:40.539 "trtype": "TCP", 00:23:40.539 "adrfam": "IPv4", 00:23:40.539 "traddr": "10.0.0.3", 00:23:40.539 "trsvcid": "4420", 00:23:40.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.539 "prchk_reftag": false, 00:23:40.539 "prchk_guard": false, 00:23:40.539 "ctrlr_loss_timeout_sec": 0, 00:23:40.539 "reconnect_delay_sec": 0, 00:23:40.539 "fast_io_fail_timeout_sec": 0, 00:23:40.539 "psk": "key0", 00:23:40.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.539 "hdgst": false, 00:23:40.539 "ddgst": false, 00:23:40.539 "multipath": "multipath" 00:23:40.539 } 00:23:40.539 }, 00:23:40.539 { 00:23:40.539 "method": "bdev_nvme_set_hotplug", 00:23:40.539 "params": { 00:23:40.539 "period_us": 100000, 00:23:40.539 "enable": false 00:23:40.539 } 00:23:40.539 }, 00:23:40.539 { 00:23:40.539 "method": "bdev_wait_for_examine" 00:23:40.539 } 00:23:40.539 ] 00:23:40.539 }, 00:23:40.539 { 00:23:40.539 "subsystem": "nbd", 00:23:40.539 "config": [] 00:23:40.539 } 00:23:40.539 ] 00:23:40.539 }' 00:23:40.539 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.539 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.539 [2024-10-17 19:25:49.697870] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:23:40.539 [2024-10-17 19:25:49.699332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72508 ] 00:23:40.797 [2024-10-17 19:25:49.835564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.797 [2024-10-17 19:25:49.914668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.797 [2024-10-17 19:25:50.052021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:41.054 [2024-10-17 19:25:50.102280] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.620 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.620 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:41.620 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.878 Running I/O for 10 seconds... 00:23:43.746 3756.00 IOPS, 14.67 MiB/s [2024-10-17T19:25:53.940Z] 3797.50 IOPS, 14.83 MiB/s [2024-10-17T19:25:55.317Z] 3779.33 IOPS, 14.76 MiB/s [2024-10-17T19:25:56.252Z] 3809.50 IOPS, 14.88 MiB/s [2024-10-17T19:25:57.187Z] 3826.20 IOPS, 14.95 MiB/s [2024-10-17T19:25:58.172Z] 3849.00 IOPS, 15.04 MiB/s [2024-10-17T19:25:59.105Z] 3850.86 IOPS, 15.04 MiB/s [2024-10-17T19:26:00.042Z] 3853.38 IOPS, 15.05 MiB/s [2024-10-17T19:26:01.013Z] 3850.56 IOPS, 15.04 MiB/s [2024-10-17T19:26:01.013Z] 3853.10 IOPS, 15.05 MiB/s 00:23:51.755 Latency(us) 00:23:51.755 [2024-10-17T19:26:01.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.755 Verification LBA range: start 0x0 length 0x2000 00:23:51.755 TLSTESTn1 : 10.02 3859.18 15.07 0.00 0.00 33109.43 5808.87 26333.56 00:23:51.755 [2024-10-17T19:26:01.013Z] =================================================================================================================== 00:23:51.755 [2024-10-17T19:26:01.013Z] Total : 3859.18 15.07 0.00 0.00 33109.43 5808.87 26333.56 00:23:51.755 { 00:23:51.755 "results": [ 00:23:51.755 { 00:23:51.755 "job": "TLSTESTn1", 00:23:51.755 "core_mask": "0x4", 00:23:51.755 "workload": "verify", 00:23:51.755 "status": "finished", 00:23:51.755 "verify_range": { 00:23:51.755 "start": 0, 00:23:51.755 "length": 8192 00:23:51.755 }, 00:23:51.755 "queue_depth": 128, 00:23:51.755 "io_size": 4096, 00:23:51.755 "runtime": 10.017424, 00:23:51.755 "iops": 3859.1757721346326, 00:23:51.755 "mibps": 15.074905359900908, 00:23:51.755 "io_failed": 0, 00:23:51.755 "io_timeout": 0, 00:23:51.755 "avg_latency_us": 33109.43411620016, 00:23:51.755 "min_latency_us": 5808.872727272727, 00:23:51.755 "max_latency_us": 26333.556363636362 00:23:51.755 } 00:23:51.755 ], 00:23:51.755 "core_count": 1 00:23:51.755 } 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72508 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72508 ']' 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 72508 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.755 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72508 00:23:51.755 killing process with pid 72508 00:23:51.755 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.755 00:23:51.755 Latency(us) 00:23:51.755 [2024-10-17T19:26:01.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.755 [2024-10-17T19:26:01.013Z] =================================================================================================================== 00:23:51.755 [2024-10-17T19:26:01.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.756 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.756 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.756 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72508' 00:23:51.756 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72508 00:23:51.756 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72508 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72476 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72476 ']' 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72476 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72476 00:23:52.014 killing process with pid 72476 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72476' 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72476 00:23:52.014 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72476 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72648 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.272 19:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72648 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72648 ']' 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.272 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.531 [2024-10-17 19:26:01.576088] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:52.531 [2024-10-17 19:26:01.576224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.531 [2024-10-17 19:26:01.719154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.531 [2024-10-17 19:26:01.788347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.788 [2024-10-17 19:26:01.788724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.788 [2024-10-17 19:26:01.788753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.788 [2024-10-17 19:26:01.788765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.788 [2024-10-17 19:26:01.788777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
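After the 10-second TLSTEST run, bdevperf (pid 72508) and the previous target (pid 72476) are shut down and a fresh nvmf_tgt comes up as pid 72648 inside the nvmf_tgt_ns_spdk namespace; target/tls.sh@221 then re-runs setup_nvmf_tgt with the PSK file /tmp/tmp.ihBASWRRuq. The RPC calls logged just below boil down to the following sequence (a sketch only; rpc.py talks to the target over its default /var/tmp/spdk.sock):

    # Sketch of the target-side TLS setup performed by setup_nvmf_tgt below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener (target logs "TLS support is considered experimental")
    $rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MB backing bdev with 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0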
00:23:52.788 [2024-10-17 19:26:01.789302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.789 [2024-10-17 19:26:01.847631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ihBASWRRuq 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ihBASWRRuq 00:23:52.789 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.047 [2024-10-17 19:26:02.211085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.047 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.304 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:23:53.871 [2024-10-17 19:26:02.855255] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.871 [2024-10-17 19:26:02.855556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:53.871 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.129 malloc0 00:23:54.129 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.387 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:54.645 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72696 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72696 /var/tmp/bdevperf.sock 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72696 ']' 00:23:54.903 
19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.903 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.161 [2024-10-17 19:26:04.177295] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:55.161 [2024-10-17 19:26:04.177637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72696 ] 00:23:55.161 [2024-10-17 19:26:04.316056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.161 [2024-10-17 19:26:04.391596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.420 [2024-10-17 19:26:04.465157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:55.420 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.420 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.420 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:55.677 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:55.946 [2024-10-17 19:26:05.194780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.214 nvme0n1 00:23:56.214 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.214 Running I/O for 1 seconds... 
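On the initiator side the pattern repeated in this log is the same each time: bdevperf is started idle with -z on its own RPC socket so it can be configured over RPC, the PSK is loaded into its keyring, the controller is attached with --psk, and the run is kicked off through bdevperf.py perform_tests. A minimal sketch of that flow, using the paths from this job:

    # Sketch of the initiator-side flow used for each bdevperf run in this log.
    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # ... waitforlisten: block until /var/tmp/bdevperf.sock accepts RPCs ...
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests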
00:23:57.587 3709.00 IOPS, 14.49 MiB/s 00:23:57.587 Latency(us) 00:23:57.587 [2024-10-17T19:26:06.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.587 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.587 Verification LBA range: start 0x0 length 0x2000 00:23:57.587 nvme0n1 : 1.02 3763.20 14.70 0.00 0.00 33628.00 7000.44 34555.35 00:23:57.587 [2024-10-17T19:26:06.845Z] =================================================================================================================== 00:23:57.587 [2024-10-17T19:26:06.845Z] Total : 3763.20 14.70 0.00 0.00 33628.00 7000.44 34555.35 00:23:57.587 { 00:23:57.587 "results": [ 00:23:57.587 { 00:23:57.587 "job": "nvme0n1", 00:23:57.587 "core_mask": "0x2", 00:23:57.587 "workload": "verify", 00:23:57.587 "status": "finished", 00:23:57.587 "verify_range": { 00:23:57.587 "start": 0, 00:23:57.587 "length": 8192 00:23:57.587 }, 00:23:57.587 "queue_depth": 128, 00:23:57.587 "io_size": 4096, 00:23:57.587 "runtime": 1.019876, 00:23:57.587 "iops": 3763.2025854123444, 00:23:57.587 "mibps": 14.70001009926697, 00:23:57.587 "io_failed": 0, 00:23:57.587 "io_timeout": 0, 00:23:57.587 "avg_latency_us": 33628.00282723009, 00:23:57.587 "min_latency_us": 7000.436363636363, 00:23:57.587 "max_latency_us": 34555.34545454545 00:23:57.587 } 00:23:57.587 ], 00:23:57.587 "core_count": 1 00:23:57.587 } 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72696 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72696 ']' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72696 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72696 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.587 killing process with pid 72696 00:23:57.587 Received shutdown signal, test time was about 1.000000 seconds 00:23:57.587 00:23:57.587 Latency(us) 00:23:57.587 [2024-10-17T19:26:06.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.587 [2024-10-17T19:26:06.845Z] =================================================================================================================== 00:23:57.587 [2024-10-17T19:26:06.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72696' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72696 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72696 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72648 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72648 ']' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72648 00:23:57.587 19:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72648 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.587 killing process with pid 72648 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72648' 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72648 00:23:57.587 19:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72648 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72745 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72745 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72745 ']' 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.846 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.105 [2024-10-17 19:26:07.122204] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:58.105 [2024-10-17 19:26:07.122342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.105 [2024-10-17 19:26:07.266878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.105 [2024-10-17 19:26:07.337271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.105 [2024-10-17 19:26:07.337359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
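Both result blocks above report IOPS and MiB/s for the same 4 KiB verify workload, so the two columns are tied by MiB/s = IOPS x 4096 B / 2^20, i.e. IOPS / 256: the 10-second TLSTESTn1 run's 3859.18 IOPS corresponds to 15.07 MiB/s and the 1-second nvme0n1 run's 3763.20 IOPS to 14.70 MiB/s, matching the tables. A quick check of that arithmetic:

    # Cross-check of the reported throughput: MiB/s = IOPS * 4096-byte I/O size / 1 MiB.
    awk 'BEGIN {
        split("3859.18 3763.20", iops, " ")                    # IOPS values from the two runs above
        for (i = 1; i <= 2; i++)
            printf "%.2f IOPS -> %.2f MiB/s\n", iops[i], iops[i] * 4096 / 1048576
    }'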
00:23:58.105 [2024-10-17 19:26:07.337374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.106 [2024-10-17 19:26:07.337385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.106 [2024-10-17 19:26:07.337395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.106 [2024-10-17 19:26:07.337961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.369 [2024-10-17 19:26:07.399067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.369 [2024-10-17 19:26:07.525011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.369 malloc0 00:23:58.369 [2024-10-17 19:26:07.557540] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.369 [2024-10-17 19:26:07.557876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72764 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72764 /var/tmp/bdevperf.sock 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72764 ']' 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
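Every "Waiting for process to start up and listen on UNIX domain socket ..." message in this log comes from the waitforlisten helper in autotest_common.sh, which polls the new process's RPC socket until it answers before any configuration RPCs are sent. A rough stand-in is sketched below; using spdk_get_version as the probe RPC is an assumption made for the sketch, not a description of what the real helper does:

    # Rough stand-in for waitforlisten: poll the RPC socket until the target/bdevperf app answers.
    wait_for_rpc_sock() {
        local sock=$1
        until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; do
            sleep 0.1
        done
    }
    wait_for_rpc_sock /var/tmp/bdevperf.sock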
00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.369 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.627 [2024-10-17 19:26:07.648680] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:23:58.627 [2024-10-17 19:26:07.648780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72764 ] 00:23:58.627 [2024-10-17 19:26:07.780706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.627 [2024-10-17 19:26:07.878060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.884 [2024-10-17 19:26:07.952354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:58.884 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:58.884 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:58.884 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ihBASWRRuq 00:23:59.140 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:59.397 [2024-10-17 19:26:08.651460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.654 nvme0n1 00:23:59.654 19:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:59.654 Running I/O for 1 seconds... 
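Once the one-second verify pass below completes, the test's next step is a configuration round-trip rather than another workload: target/tls.sh@267 runs rpc_cmd save_config to capture the running target's JSON state into $tgtcfg, and target/tls.sh@268 runs rpc.py -s /var/tmp/bdevperf.sock save_config to capture the initiator's into $bperfcfg, so the keyring, sock (ssl/posix/uring), bdev and nvmf settings -- including the TLS listener and the attached controller -- can be replayed verbatim later. In sketch form (the test keeps these in shell variables via its rpc_cmd wrapper; plain rpc.py and placeholder output files are shown here only for clarity):

    # Sketch: capture the live JSON configuration of both sides for later replay.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgtcfg=$($rpc save_config)                                # target side, default /var/tmp/spdk.sock
    bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)    # bdevperf initiator side
    printf '%s\n' "$tgtcfg"   > /tmp/tgt_config.json          # placeholder files for inspection
    printf '%s\n' "$bperfcfg" > /tmp/bperf_config.json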
00:24:01.025 3712.00 IOPS, 14.50 MiB/s 00:24:01.025 Latency(us) 00:24:01.025 [2024-10-17T19:26:10.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.025 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:01.025 Verification LBA range: start 0x0 length 0x2000 00:24:01.025 nvme0n1 : 1.02 3760.57 14.69 0.00 0.00 33650.78 11081.54 23950.43 00:24:01.025 [2024-10-17T19:26:10.283Z] =================================================================================================================== 00:24:01.025 [2024-10-17T19:26:10.283Z] Total : 3760.57 14.69 0.00 0.00 33650.78 11081.54 23950.43 00:24:01.025 { 00:24:01.025 "results": [ 00:24:01.025 { 00:24:01.025 "job": "nvme0n1", 00:24:01.025 "core_mask": "0x2", 00:24:01.025 "workload": "verify", 00:24:01.025 "status": "finished", 00:24:01.025 "verify_range": { 00:24:01.025 "start": 0, 00:24:01.025 "length": 8192 00:24:01.025 }, 00:24:01.025 "queue_depth": 128, 00:24:01.025 "io_size": 4096, 00:24:01.025 "runtime": 1.021122, 00:24:01.025 "iops": 3760.569256171153, 00:24:01.025 "mibps": 14.689723656918567, 00:24:01.025 "io_failed": 0, 00:24:01.025 "io_timeout": 0, 00:24:01.025 "avg_latency_us": 33650.78109090909, 00:24:01.025 "min_latency_us": 11081.541818181819, 00:24:01.025 "max_latency_us": 23950.429090909092 00:24:01.025 } 00:24:01.025 ], 00:24:01.025 "core_count": 1 00:24:01.025 } 00:24:01.025 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:01.025 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.025 19:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.025 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.025 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:01.025 "subsystems": [ 00:24:01.025 { 00:24:01.025 "subsystem": "keyring", 00:24:01.025 "config": [ 00:24:01.025 { 00:24:01.025 "method": "keyring_file_add_key", 00:24:01.025 "params": { 00:24:01.025 "name": "key0", 00:24:01.025 "path": "/tmp/tmp.ihBASWRRuq" 00:24:01.025 } 00:24:01.025 } 00:24:01.025 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "iobuf", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "iobuf_set_options", 00:24:01.026 "params": { 00:24:01.026 "small_pool_count": 8192, 00:24:01.026 "large_pool_count": 1024, 00:24:01.026 "small_bufsize": 8192, 00:24:01.026 "large_bufsize": 135168 00:24:01.026 } 00:24:01.026 } 00:24:01.026 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "sock", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "sock_set_default_impl", 00:24:01.026 "params": { 00:24:01.026 "impl_name": "uring" 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "sock_impl_set_options", 00:24:01.026 "params": { 00:24:01.026 "impl_name": "ssl", 00:24:01.026 "recv_buf_size": 4096, 00:24:01.026 "send_buf_size": 4096, 00:24:01.026 "enable_recv_pipe": true, 00:24:01.026 "enable_quickack": false, 00:24:01.026 "enable_placement_id": 0, 00:24:01.026 "enable_zerocopy_send_server": true, 00:24:01.026 "enable_zerocopy_send_client": false, 00:24:01.026 "zerocopy_threshold": 0, 00:24:01.026 "tls_version": 0, 00:24:01.026 "enable_ktls": false 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "sock_impl_set_options", 00:24:01.026 "params": { 00:24:01.026 "impl_name": "posix", 00:24:01.026 "recv_buf_size": 
2097152, 00:24:01.026 "send_buf_size": 2097152, 00:24:01.026 "enable_recv_pipe": true, 00:24:01.026 "enable_quickack": false, 00:24:01.026 "enable_placement_id": 0, 00:24:01.026 "enable_zerocopy_send_server": true, 00:24:01.026 "enable_zerocopy_send_client": false, 00:24:01.026 "zerocopy_threshold": 0, 00:24:01.026 "tls_version": 0, 00:24:01.026 "enable_ktls": false 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "sock_impl_set_options", 00:24:01.026 "params": { 00:24:01.026 "impl_name": "uring", 00:24:01.026 "recv_buf_size": 2097152, 00:24:01.026 "send_buf_size": 2097152, 00:24:01.026 "enable_recv_pipe": true, 00:24:01.026 "enable_quickack": false, 00:24:01.026 "enable_placement_id": 0, 00:24:01.026 "enable_zerocopy_send_server": false, 00:24:01.026 "enable_zerocopy_send_client": false, 00:24:01.026 "zerocopy_threshold": 0, 00:24:01.026 "tls_version": 0, 00:24:01.026 "enable_ktls": false 00:24:01.026 } 00:24:01.026 } 00:24:01.026 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "vmd", 00:24:01.026 "config": [] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "accel", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "accel_set_options", 00:24:01.026 "params": { 00:24:01.026 "small_cache_size": 128, 00:24:01.026 "large_cache_size": 16, 00:24:01.026 "task_count": 2048, 00:24:01.026 "sequence_count": 2048, 00:24:01.026 "buf_count": 2048 00:24:01.026 } 00:24:01.026 } 00:24:01.026 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "bdev", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "bdev_set_options", 00:24:01.026 "params": { 00:24:01.026 "bdev_io_pool_size": 65535, 00:24:01.026 "bdev_io_cache_size": 256, 00:24:01.026 "bdev_auto_examine": true, 00:24:01.026 "iobuf_small_cache_size": 128, 00:24:01.026 "iobuf_large_cache_size": 16 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_raid_set_options", 00:24:01.026 "params": { 00:24:01.026 "process_window_size_kb": 1024, 00:24:01.026 "process_max_bandwidth_mb_sec": 0 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_iscsi_set_options", 00:24:01.026 "params": { 00:24:01.026 "timeout_sec": 30 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_nvme_set_options", 00:24:01.026 "params": { 00:24:01.026 "action_on_timeout": "none", 00:24:01.026 "timeout_us": 0, 00:24:01.026 "timeout_admin_us": 0, 00:24:01.026 "keep_alive_timeout_ms": 10000, 00:24:01.026 "arbitration_burst": 0, 00:24:01.026 "low_priority_weight": 0, 00:24:01.026 "medium_priority_weight": 0, 00:24:01.026 "high_priority_weight": 0, 00:24:01.026 "nvme_adminq_poll_period_us": 10000, 00:24:01.026 "nvme_ioq_poll_period_us": 0, 00:24:01.026 "io_queue_requests": 0, 00:24:01.026 "delay_cmd_submit": true, 00:24:01.026 "transport_retry_count": 4, 00:24:01.026 "bdev_retry_count": 3, 00:24:01.026 "transport_ack_timeout": 0, 00:24:01.026 "ctrlr_loss_timeout_sec": 0, 00:24:01.026 "reconnect_delay_sec": 0, 00:24:01.026 "fast_io_fail_timeout_sec": 0, 00:24:01.026 "disable_auto_failback": false, 00:24:01.026 "generate_uuids": false, 00:24:01.026 "transport_tos": 0, 00:24:01.026 "nvme_error_stat": false, 00:24:01.026 "rdma_srq_size": 0, 00:24:01.026 "io_path_stat": false, 00:24:01.026 "allow_accel_sequence": false, 00:24:01.026 "rdma_max_cq_size": 0, 00:24:01.026 "rdma_cm_event_timeout_ms": 0, 00:24:01.026 "dhchap_digests": [ 00:24:01.026 "sha256", 00:24:01.026 "sha384", 00:24:01.026 "sha512" 00:24:01.026 ], 00:24:01.026 "dhchap_dhgroups": [ 00:24:01.026 
"null", 00:24:01.026 "ffdhe2048", 00:24:01.026 "ffdhe3072", 00:24:01.026 "ffdhe4096", 00:24:01.026 "ffdhe6144", 00:24:01.026 "ffdhe8192" 00:24:01.026 ] 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_nvme_set_hotplug", 00:24:01.026 "params": { 00:24:01.026 "period_us": 100000, 00:24:01.026 "enable": false 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_malloc_create", 00:24:01.026 "params": { 00:24:01.026 "name": "malloc0", 00:24:01.026 "num_blocks": 8192, 00:24:01.026 "block_size": 4096, 00:24:01.026 "physical_block_size": 4096, 00:24:01.026 "uuid": "5ddf85d5-1682-4c2e-ab05-b5dd8c411c94", 00:24:01.026 "optimal_io_boundary": 0, 00:24:01.026 "md_size": 0, 00:24:01.026 "dif_type": 0, 00:24:01.026 "dif_is_head_of_md": false, 00:24:01.026 "dif_pi_format": 0 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "bdev_wait_for_examine" 00:24:01.026 } 00:24:01.026 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "nbd", 00:24:01.026 "config": [] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "scheduler", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "framework_set_scheduler", 00:24:01.026 "params": { 00:24:01.026 "name": "static" 00:24:01.026 } 00:24:01.026 } 00:24:01.026 ] 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "subsystem": "nvmf", 00:24:01.026 "config": [ 00:24:01.026 { 00:24:01.026 "method": "nvmf_set_config", 00:24:01.026 "params": { 00:24:01.026 "discovery_filter": "match_any", 00:24:01.026 "admin_cmd_passthru": { 00:24:01.026 "identify_ctrlr": false 00:24:01.026 }, 00:24:01.026 "dhchap_digests": [ 00:24:01.026 "sha256", 00:24:01.026 "sha384", 00:24:01.026 "sha512" 00:24:01.026 ], 00:24:01.026 "dhchap_dhgroups": [ 00:24:01.026 "null", 00:24:01.026 "ffdhe2048", 00:24:01.026 "ffdhe3072", 00:24:01.026 "ffdhe4096", 00:24:01.026 "ffdhe6144", 00:24:01.026 "ffdhe8192" 00:24:01.026 ] 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_set_max_subsystems", 00:24:01.026 "params": { 00:24:01.026 "max_subsystems": 1024 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_set_crdt", 00:24:01.026 "params": { 00:24:01.026 "crdt1": 0, 00:24:01.026 "crdt2": 0, 00:24:01.026 "crdt3": 0 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_create_transport", 00:24:01.026 "params": { 00:24:01.026 "trtype": "TCP", 00:24:01.026 "max_queue_depth": 128, 00:24:01.026 "max_io_qpairs_per_ctrlr": 127, 00:24:01.026 "in_capsule_data_size": 4096, 00:24:01.026 "max_io_size": 131072, 00:24:01.026 "io_unit_size": 131072, 00:24:01.026 "max_aq_depth": 128, 00:24:01.026 "num_shared_buffers": 511, 00:24:01.026 "buf_cache_size": 4294967295, 00:24:01.026 "dif_insert_or_strip": false, 00:24:01.026 "zcopy": false, 00:24:01.026 "c2h_success": false, 00:24:01.026 "sock_priority": 0, 00:24:01.026 "abort_timeout_sec": 1, 00:24:01.026 "ack_timeout": 0, 00:24:01.026 "data_wr_pool_size": 0 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_create_subsystem", 00:24:01.026 "params": { 00:24:01.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.026 "allow_any_host": false, 00:24:01.026 "serial_number": "00000000000000000000", 00:24:01.026 "model_number": "SPDK bdev Controller", 00:24:01.026 "max_namespaces": 32, 00:24:01.026 "min_cntlid": 1, 00:24:01.026 "max_cntlid": 65519, 00:24:01.026 "ana_reporting": false 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_subsystem_add_host", 00:24:01.026 "params": { 00:24:01.026 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:01.026 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.026 "psk": "key0" 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_subsystem_add_ns", 00:24:01.026 "params": { 00:24:01.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.026 "namespace": { 00:24:01.026 "nsid": 1, 00:24:01.026 "bdev_name": "malloc0", 00:24:01.026 "nguid": "5DDF85D516824C2EAB05B5DD8C411C94", 00:24:01.026 "uuid": "5ddf85d5-1682-4c2e-ab05-b5dd8c411c94", 00:24:01.026 "no_auto_visible": false 00:24:01.026 } 00:24:01.026 } 00:24:01.026 }, 00:24:01.026 { 00:24:01.026 "method": "nvmf_subsystem_add_listener", 00:24:01.026 "params": { 00:24:01.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.026 "listen_address": { 00:24:01.026 "trtype": "TCP", 00:24:01.026 "adrfam": "IPv4", 00:24:01.026 "traddr": "10.0.0.3", 00:24:01.027 "trsvcid": "4420" 00:24:01.027 }, 00:24:01.027 "secure_channel": false, 00:24:01.027 "sock_impl": "ssl" 00:24:01.027 } 00:24:01.027 } 00:24:01.027 ] 00:24:01.027 } 00:24:01.027 ] 00:24:01.027 }' 00:24:01.027 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:01.285 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:01.285 "subsystems": [ 00:24:01.285 { 00:24:01.285 "subsystem": "keyring", 00:24:01.285 "config": [ 00:24:01.285 { 00:24:01.285 "method": "keyring_file_add_key", 00:24:01.285 "params": { 00:24:01.285 "name": "key0", 00:24:01.285 "path": "/tmp/tmp.ihBASWRRuq" 00:24:01.285 } 00:24:01.285 } 00:24:01.285 ] 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "subsystem": "iobuf", 00:24:01.285 "config": [ 00:24:01.285 { 00:24:01.285 "method": "iobuf_set_options", 00:24:01.285 "params": { 00:24:01.285 "small_pool_count": 8192, 00:24:01.285 "large_pool_count": 1024, 00:24:01.285 "small_bufsize": 8192, 00:24:01.285 "large_bufsize": 135168 00:24:01.285 } 00:24:01.285 } 00:24:01.285 ] 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "subsystem": "sock", 00:24:01.285 "config": [ 00:24:01.285 { 00:24:01.285 "method": "sock_set_default_impl", 00:24:01.285 "params": { 00:24:01.285 "impl_name": "uring" 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "sock_impl_set_options", 00:24:01.285 "params": { 00:24:01.285 "impl_name": "ssl", 00:24:01.285 "recv_buf_size": 4096, 00:24:01.285 "send_buf_size": 4096, 00:24:01.285 "enable_recv_pipe": true, 00:24:01.285 "enable_quickack": false, 00:24:01.285 "enable_placement_id": 0, 00:24:01.285 "enable_zerocopy_send_server": true, 00:24:01.285 "enable_zerocopy_send_client": false, 00:24:01.285 "zerocopy_threshold": 0, 00:24:01.285 "tls_version": 0, 00:24:01.285 "enable_ktls": false 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "sock_impl_set_options", 00:24:01.285 "params": { 00:24:01.285 "impl_name": "posix", 00:24:01.285 "recv_buf_size": 2097152, 00:24:01.285 "send_buf_size": 2097152, 00:24:01.285 "enable_recv_pipe": true, 00:24:01.285 "enable_quickack": false, 00:24:01.285 "enable_placement_id": 0, 00:24:01.285 "enable_zerocopy_send_server": true, 00:24:01.285 "enable_zerocopy_send_client": false, 00:24:01.285 "zerocopy_threshold": 0, 00:24:01.285 "tls_version": 0, 00:24:01.285 "enable_ktls": false 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "sock_impl_set_options", 00:24:01.285 "params": { 00:24:01.285 "impl_name": "uring", 00:24:01.285 "recv_buf_size": 2097152, 00:24:01.285 "send_buf_size": 2097152, 00:24:01.285 
"enable_recv_pipe": true, 00:24:01.285 "enable_quickack": false, 00:24:01.285 "enable_placement_id": 0, 00:24:01.285 "enable_zerocopy_send_server": false, 00:24:01.285 "enable_zerocopy_send_client": false, 00:24:01.285 "zerocopy_threshold": 0, 00:24:01.285 "tls_version": 0, 00:24:01.285 "enable_ktls": false 00:24:01.285 } 00:24:01.285 } 00:24:01.285 ] 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "subsystem": "vmd", 00:24:01.285 "config": [] 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "subsystem": "accel", 00:24:01.285 "config": [ 00:24:01.285 { 00:24:01.285 "method": "accel_set_options", 00:24:01.285 "params": { 00:24:01.285 "small_cache_size": 128, 00:24:01.285 "large_cache_size": 16, 00:24:01.285 "task_count": 2048, 00:24:01.285 "sequence_count": 2048, 00:24:01.285 "buf_count": 2048 00:24:01.285 } 00:24:01.285 } 00:24:01.285 ] 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "subsystem": "bdev", 00:24:01.285 "config": [ 00:24:01.285 { 00:24:01.285 "method": "bdev_set_options", 00:24:01.285 "params": { 00:24:01.285 "bdev_io_pool_size": 65535, 00:24:01.285 "bdev_io_cache_size": 256, 00:24:01.285 "bdev_auto_examine": true, 00:24:01.285 "iobuf_small_cache_size": 128, 00:24:01.285 "iobuf_large_cache_size": 16 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "bdev_raid_set_options", 00:24:01.285 "params": { 00:24:01.285 "process_window_size_kb": 1024, 00:24:01.285 "process_max_bandwidth_mb_sec": 0 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "bdev_iscsi_set_options", 00:24:01.285 "params": { 00:24:01.285 "timeout_sec": 30 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "bdev_nvme_set_options", 00:24:01.285 "params": { 00:24:01.285 "action_on_timeout": "none", 00:24:01.285 "timeout_us": 0, 00:24:01.285 "timeout_admin_us": 0, 00:24:01.285 "keep_alive_timeout_ms": 10000, 00:24:01.285 "arbitration_burst": 0, 00:24:01.285 "low_priority_weight": 0, 00:24:01.285 "medium_priority_weight": 0, 00:24:01.285 "high_priority_weight": 0, 00:24:01.285 "nvme_adminq_poll_period_us": 10000, 00:24:01.285 "nvme_ioq_poll_period_us": 0, 00:24:01.285 "io_queue_requests": 512, 00:24:01.285 "delay_cmd_submit": true, 00:24:01.285 "transport_retry_count": 4, 00:24:01.285 "bdev_retry_count": 3, 00:24:01.285 "transport_ack_timeout": 0, 00:24:01.285 "ctrlr_loss_timeout_sec": 0, 00:24:01.285 "reconnect_delay_sec": 0, 00:24:01.285 "fast_io_fail_timeout_sec": 0, 00:24:01.285 "disable_auto_failback": false, 00:24:01.285 "generate_uuids": false, 00:24:01.285 "transport_tos": 0, 00:24:01.285 "nvme_error_stat": false, 00:24:01.285 "rdma_srq_size": 0, 00:24:01.285 "io_path_stat": false, 00:24:01.285 "allow_accel_sequence": false, 00:24:01.285 "rdma_max_cq_size": 0, 00:24:01.285 "rdma_cm_event_timeout_ms": 0, 00:24:01.285 "dhchap_digests": [ 00:24:01.285 "sha256", 00:24:01.285 "sha384", 00:24:01.285 "sha512" 00:24:01.285 ], 00:24:01.285 "dhchap_dhgroups": [ 00:24:01.285 "null", 00:24:01.285 "ffdhe2048", 00:24:01.285 "ffdhe3072", 00:24:01.285 "ffdhe4096", 00:24:01.285 "ffdhe6144", 00:24:01.285 "ffdhe8192" 00:24:01.285 ] 00:24:01.285 } 00:24:01.285 }, 00:24:01.285 { 00:24:01.285 "method": "bdev_nvme_attach_controller", 00:24:01.285 "params": { 00:24:01.285 "name": "nvme0", 00:24:01.285 "trtype": "TCP", 00:24:01.285 "adrfam": "IPv4", 00:24:01.285 "traddr": "10.0.0.3", 00:24:01.286 "trsvcid": "4420", 00:24:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.286 "prchk_reftag": false, 00:24:01.286 "prchk_guard": false, 00:24:01.286 "ctrlr_loss_timeout_sec": 0, 00:24:01.286 
"reconnect_delay_sec": 0, 00:24:01.286 "fast_io_fail_timeout_sec": 0, 00:24:01.286 "psk": "key0", 00:24:01.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.286 "hdgst": false, 00:24:01.286 "ddgst": false, 00:24:01.286 "multipath": "multipath" 00:24:01.286 } 00:24:01.286 }, 00:24:01.286 { 00:24:01.286 "method": "bdev_nvme_set_hotplug", 00:24:01.286 "params": { 00:24:01.286 "period_us": 100000, 00:24:01.286 "enable": false 00:24:01.286 } 00:24:01.286 }, 00:24:01.286 { 00:24:01.286 "method": "bdev_enable_histogram", 00:24:01.286 "params": { 00:24:01.286 "name": "nvme0n1", 00:24:01.286 "enable": true 00:24:01.286 } 00:24:01.286 }, 00:24:01.286 { 00:24:01.286 "method": "bdev_wait_for_examine" 00:24:01.286 } 00:24:01.286 ] 00:24:01.286 }, 00:24:01.286 { 00:24:01.286 "subsystem": "nbd", 00:24:01.286 "config": [] 00:24:01.286 } 00:24:01.286 ] 00:24:01.286 }' 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72764 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72764 ']' 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72764 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72764 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72764' 00:24:01.286 killing process with pid 72764 00:24:01.286 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.286 00:24:01.286 Latency(us) 00:24:01.286 [2024-10-17T19:26:10.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.286 [2024-10-17T19:26:10.544Z] =================================================================================================================== 00:24:01.286 [2024-10-17T19:26:10.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72764 00:24:01.286 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72764 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72745 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72745 ']' 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72745 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72745 00:24:01.543 killing process with pid 72745 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72745' 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72745 00:24:01.543 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72745 00:24:01.802 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:01.802 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:01.802 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:01.802 "subsystems": [ 00:24:01.802 { 00:24:01.802 "subsystem": "keyring", 00:24:01.802 "config": [ 00:24:01.802 { 00:24:01.802 "method": "keyring_file_add_key", 00:24:01.802 "params": { 00:24:01.802 "name": "key0", 00:24:01.802 "path": "/tmp/tmp.ihBASWRRuq" 00:24:01.802 } 00:24:01.802 } 00:24:01.802 ] 00:24:01.802 }, 00:24:01.802 { 00:24:01.802 "subsystem": "iobuf", 00:24:01.802 "config": [ 00:24:01.802 { 00:24:01.802 "method": "iobuf_set_options", 00:24:01.802 "params": { 00:24:01.802 "small_pool_count": 8192, 00:24:01.802 "large_pool_count": 1024, 00:24:01.802 "small_bufsize": 8192, 00:24:01.802 "large_bufsize": 135168 00:24:01.802 } 00:24:01.802 } 00:24:01.802 ] 00:24:01.802 }, 00:24:01.802 { 00:24:01.802 "subsystem": "sock", 00:24:01.802 "config": [ 00:24:01.802 { 00:24:01.802 "method": "sock_set_default_impl", 00:24:01.802 "params": { 00:24:01.802 "impl_name": "uring" 00:24:01.802 } 00:24:01.802 }, 00:24:01.802 { 00:24:01.802 "method": "sock_impl_set_options", 00:24:01.802 "params": { 00:24:01.802 "impl_name": "ssl", 00:24:01.802 "recv_buf_size": 4096, 00:24:01.803 "send_buf_size": 4096, 00:24:01.803 "enable_recv_pipe": true, 00:24:01.803 "enable_quickack": false, 00:24:01.803 "enable_placement_id": 0, 00:24:01.803 "enable_zerocopy_send_server": true, 00:24:01.803 "enable_zerocopy_send_client": false, 00:24:01.803 "zerocopy_threshold": 0, 00:24:01.803 "tls_version": 0, 00:24:01.803 "enable_ktls": false 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "sock_impl_set_options", 00:24:01.803 "params": { 00:24:01.803 "impl_name": "posix", 00:24:01.803 "recv_buf_size": 2097152, 00:24:01.803 "send_buf_size": 2097152, 00:24:01.803 "enable_recv_pipe": true, 00:24:01.803 "enable_quickack": false, 00:24:01.803 "enable_placement_id": 0, 00:24:01.803 "enable_zerocopy_send_server": true, 00:24:01.803 "enable_zerocopy_send_client": false, 00:24:01.803 "zerocopy_threshold": 0, 00:24:01.803 "tls_version": 0, 00:24:01.803 "enable_ktls": false 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "sock_impl_set_options", 00:24:01.803 "params": { 00:24:01.803 "impl_name": "uring", 00:24:01.803 "recv_buf_size": 2097152, 00:24:01.803 "send_buf_size": 2097152, 00:24:01.803 "enable_recv_pipe": true, 00:24:01.803 "enable_quickack": false, 00:24:01.803 "enable_placement_id": 0, 00:24:01.803 "enable_zerocopy_send_server": false, 00:24:01.803 "enable_zerocopy_send_client": false, 00:24:01.803 "zerocopy_threshold": 0, 00:24:01.803 "tls_version": 0, 00:24:01.803 "enable_ktls": false 00:24:01.803 } 00:24:01.803 } 00:24:01.803 ] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "vmd", 00:24:01.803 "config": [] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "accel", 00:24:01.803 "config": [ 00:24:01.803 { 00:24:01.803 "method": "accel_set_options", 
00:24:01.803 "params": { 00:24:01.803 "small_cache_size": 128, 00:24:01.803 "large_cache_size": 16, 00:24:01.803 "task_count": 2048, 00:24:01.803 "sequence_count": 2048, 00:24:01.803 "buf_count": 2048 00:24:01.803 } 00:24:01.803 } 00:24:01.803 ] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "bdev", 00:24:01.803 "config": [ 00:24:01.803 { 00:24:01.803 "method": "bdev_set_options", 00:24:01.803 "params": { 00:24:01.803 "bdev_io_pool_size": 65535, 00:24:01.803 "bdev_io_cache_size": 256, 00:24:01.803 "bdev_auto_examine": true, 00:24:01.803 "iobuf_small_cache_size": 128, 00:24:01.803 "iobuf_large_cache_size": 16 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_raid_set_options", 00:24:01.803 "params": { 00:24:01.803 "process_window_size_kb": 1024, 00:24:01.803 "process_max_bandwidth_mb_sec": 0 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_iscsi_set_options", 00:24:01.803 "params": { 00:24:01.803 "timeout_sec": 30 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_nvme_set_options", 00:24:01.803 "params": { 00:24:01.803 "action_on_timeout": "none", 00:24:01.803 "timeout_us": 0, 00:24:01.803 "timeout_admin_us": 0, 00:24:01.803 "keep_alive_timeout_ms": 10000, 00:24:01.803 "arbitration_burst": 0, 00:24:01.803 "low_priority_weight": 0, 00:24:01.803 "medium_priority_weight": 0, 00:24:01.803 "high_priority_weight": 0, 00:24:01.803 "nvme_adminq_poll_period_us": 10000, 00:24:01.803 "nvme_ioq_poll_period_us": 0, 00:24:01.803 "io_queue_requests": 0, 00:24:01.803 "delay_cmd_submit": true, 00:24:01.803 "transport_retry_count": 4, 00:24:01.803 "bdev_retry_count": 3, 00:24:01.803 "transport_ack_timeout": 0, 00:24:01.803 "ctrlr_loss_timeout_sec": 0, 00:24:01.803 "reconnect_delay_sec": 0, 00:24:01.803 "fast_io_fail_timeout_sec": 0, 00:24:01.803 "disable_auto_failback": false, 00:24:01.803 "generate_uuids": false, 00:24:01.803 "transport_tos": 0, 00:24:01.803 "nvme_error_stat": false, 00:24:01.803 "rdma_srq_size": 0, 00:24:01.803 "io_path_stat": false, 00:24:01.803 "allow_accel_sequence": false, 00:24:01.803 "rdma_max_cq_size": 0, 00:24:01.803 "rdma_cm_event_timeout_ms": 0, 00:24:01.803 "dhchap_digests": [ 00:24:01.803 "sha256", 00:24:01.803 "sha384", 00:24:01.803 "sha512" 00:24:01.803 ], 00:24:01.803 "dhchap_dhgroups": [ 00:24:01.803 "null", 00:24:01.803 "ffdhe2048", 00:24:01.803 "ffdhe3072", 00:24:01.803 "ffdhe4096", 00:24:01.803 "ffdhe6144", 00:24:01.803 "ffdhe8192" 00:24:01.803 ] 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_nvme_set_hotplug", 00:24:01.803 "params": { 00:24:01.803 "period_us": 100000, 00:24:01.803 "enable": false 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_malloc_create", 00:24:01.803 "params": { 00:24:01.803 "name": "malloc0", 00:24:01.803 "num_blocks": 8192, 00:24:01.803 "block_size": 4096, 00:24:01.803 "physical_block_size": 4096, 00:24:01.803 "uuid": "5ddf85d5-1682-4c2e-ab05-b5dd8c411c94", 00:24:01.803 "optimal_io_boundary": 0, 00:24:01.803 "md_size": 0, 00:24:01.803 "dif_type": 0, 00:24:01.803 "dif_is_head_of_md": false, 00:24:01.803 "dif_pi_format": 0 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "bdev_wait_for_examine" 00:24:01.803 } 00:24:01.803 ] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "nbd", 00:24:01.803 "config": [] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "scheduler", 00:24:01.803 "config": [ 00:24:01.803 { 00:24:01.803 "method": "framework_set_scheduler", 00:24:01.803 
"params": { 00:24:01.803 "name": "static" 00:24:01.803 } 00:24:01.803 } 00:24:01.803 ] 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "subsystem": "nvmf", 00:24:01.803 "config": [ 00:24:01.803 { 00:24:01.803 "method": "nvmf_set_config", 00:24:01.803 "params": { 00:24:01.803 "discovery_filter": "match_any", 00:24:01.803 "admin_cmd_passthru": { 00:24:01.803 "identify_ctrlr": false 00:24:01.803 }, 00:24:01.803 "dhchap_digests": [ 00:24:01.803 "sha256", 00:24:01.803 "sha384", 00:24:01.803 "sha512" 00:24:01.803 ], 00:24:01.803 "dhchap_dhgroups": [ 00:24:01.803 "null", 00:24:01.803 "ffdhe2048", 00:24:01.803 "ffdhe3072", 00:24:01.803 "ffdhe4096", 00:24:01.803 "ffdhe6144", 00:24:01.803 "ffdhe8192" 00:24:01.803 ] 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "nvmf_set_max_subsystems", 00:24:01.803 "params": { 00:24:01.803 "max_subsystems": 1024 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "nvmf_set_crdt", 00:24:01.803 "params": { 00:24:01.803 "crdt1": 0, 00:24:01.803 "crdt2": 0, 00:24:01.803 "crdt3": 0 00:24:01.803 } 00:24:01.803 }, 00:24:01.803 { 00:24:01.803 "method": "nvmf_create_transport", 00:24:01.803 "params": { 00:24:01.803 "trtype": "TCP", 00:24:01.803 "max_queue_depth": 128, 00:24:01.803 "max_io_qpairs_per_ctrlr": 127, 00:24:01.803 "in_capsule_data_size": 4096, 00:24:01.803 "max_io_size": 131072, 00:24:01.803 "io_unit_size": 131072, 00:24:01.803 "max_aq_depth": 128, 00:24:01.803 "num_shared_buffers": 511, 00:24:01.803 "buf_cache_size": 4294967295, 00:24:01.803 "dif_insert_or_strip": false, 00:24:01.803 "zcopy": false, 00:24:01.803 "c2h_success": false, 00:24:01.803 "sock_priority": 0, 00:24:01.804 "abort_timeout_sec": 1, 00:24:01.804 "ack_timeout": 0, 00:24:01.804 "data_wr_pool_size": 0 00:24:01.804 } 00:24:01.804 }, 00:24:01.804 { 00:24:01.804 "method": "nvmf_create_subsystem", 00:24:01.804 "params": { 00:24:01.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.804 "allow_any_host": false, 00:24:01.804 "serial_number": "00000000000000000000", 00:24:01.804 "model_number": "SPDK bdev Controller", 00:24:01.804 "max_namespaces": 32, 00:24:01.804 "min_cntlid": 1, 00:24:01.804 "max_cntlid": 65519, 00:24:01.804 "ana_reporting": false 00:24:01.804 } 00:24:01.804 }, 00:24:01.804 { 00:24:01.804 "method": "nvmf_subsystem_add_host", 00:24:01.804 "params": { 00:24:01.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.804 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.804 "psk": "key0" 00:24:01.804 } 00:24:01.804 }, 00:24:01.804 { 00:24:01.804 "method": "nvmf_subsystem_add_ns", 00:24:01.804 "params": { 00:24:01.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.804 "namespace": { 00:24:01.804 "nsid": 1, 00:24:01.804 "bdev_name": "malloc0", 00:24:01.804 "nguid": "5DDF85D516824C2EAB05B5DD8C411C94", 00:24:01.804 "uuid": "5ddf85d5-1682-4c2e-ab05-b5dd8c411c94", 00:24:01.804 "no_auto_visible": false 00:24:01.804 } 00:24:01.804 } 00:24:01.804 }, 00:24:01.804 { 00:24:01.804 "method": "nvmf_subsystem_add_listener", 00:24:01.804 "params": { 00:24:01.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.804 "listen_address": { 00:24:01.804 "trtype": "TCP", 00:24:01.804 "adrfam": "IPv4", 00:24:01.804 "traddr": "10.0.0.3", 00:24:01.804 "trsvcid": "4420" 00:24:01.804 }, 00:24:01.804 "secure_channel": false, 00:24:01.804 "sock_impl": "ssl" 00:24:01.804 } 00:24:01.804 } 00:24:01.804 ] 00:24:01.804 } 00:24:01.804 ] 00:24:01.804 }' 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.804 19:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72823 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72823 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72823 ']' 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.804 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.061 [2024-10-17 19:26:11.093841] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:02.061 [2024-10-17 19:26:11.094269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.061 [2024-10-17 19:26:11.236345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.061 [2024-10-17 19:26:11.312493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.061 [2024-10-17 19:26:11.312562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.062 [2024-10-17 19:26:11.312574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.062 [2024-10-17 19:26:11.312583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.062 [2024-10-17 19:26:11.312590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
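This new target (nvmfpid=72823) is the replay half of that round-trip: nvmfappstart -c /dev/fd/62 hands nvmf_tgt the previously captured $tgtcfg over a file descriptor instead of re-issuing the setup RPCs, which is why the TCP transport, the TLS listener on 10.0.0.3:4420 and the rest of the saved state come back below without setup_nvmf_tgt running again. With bash process substitution that amounts to roughly:

    # Sketch: restart the target from the saved JSON config instead of re-running the setup RPCs.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    # process substitution surfaces as a /dev/fd/NN path on the command line -- /dev/fd/62 in this run

The bdevperf instance started right after it (pid 72855) is fed $bperfcfg the same way, via -c /dev/fd/63.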
00:24:02.062 [2024-10-17 19:26:11.313089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.320 [2024-10-17 19:26:11.500960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:02.578 [2024-10-17 19:26:11.596496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.578 [2024-10-17 19:26:11.628421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.578 [2024-10-17 19:26:11.628669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72855 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72855 /var/tmp/bdevperf.sock 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72855 ']' 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:03.144 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:03.144 "subsystems": [ 00:24:03.144 { 00:24:03.144 "subsystem": "keyring", 00:24:03.144 "config": [ 00:24:03.144 { 00:24:03.144 "method": "keyring_file_add_key", 00:24:03.144 "params": { 00:24:03.144 "name": "key0", 00:24:03.144 "path": "/tmp/tmp.ihBASWRRuq" 00:24:03.144 } 00:24:03.144 } 00:24:03.144 ] 00:24:03.144 }, 00:24:03.144 { 00:24:03.144 "subsystem": "iobuf", 00:24:03.144 "config": [ 00:24:03.144 { 00:24:03.144 "method": "iobuf_set_options", 00:24:03.144 "params": { 00:24:03.144 "small_pool_count": 8192, 00:24:03.144 "large_pool_count": 1024, 00:24:03.144 "small_bufsize": 8192, 00:24:03.144 "large_bufsize": 135168 00:24:03.144 } 00:24:03.144 } 00:24:03.144 ] 00:24:03.144 }, 00:24:03.144 { 00:24:03.144 "subsystem": "sock", 00:24:03.144 "config": [ 00:24:03.144 { 00:24:03.144 "method": "sock_set_default_impl", 00:24:03.144 "params": { 00:24:03.144 "impl_name": "uring" 00:24:03.144 } 00:24:03.144 }, 00:24:03.144 { 00:24:03.144 "method": "sock_impl_set_options", 00:24:03.144 "params": { 00:24:03.144 "impl_name": "ssl", 00:24:03.144 "recv_buf_size": 4096, 00:24:03.144 "send_buf_size": 4096, 00:24:03.144 "enable_recv_pipe": true, 00:24:03.144 "enable_quickack": false, 00:24:03.144 "enable_placement_id": 0, 00:24:03.144 "enable_zerocopy_send_server": true, 00:24:03.144 "enable_zerocopy_send_client": false, 00:24:03.144 "zerocopy_threshold": 0, 00:24:03.144 "tls_version": 0, 00:24:03.144 "enable_ktls": false 00:24:03.144 } 00:24:03.144 }, 
00:24:03.144 { 00:24:03.144 "method": "sock_impl_set_options", 00:24:03.144 "params": { 00:24:03.144 "impl_name": "posix", 00:24:03.144 "recv_buf_size": 2097152, 00:24:03.144 "send_buf_size": 2097152, 00:24:03.144 "enable_recv_pipe": true, 00:24:03.144 "enable_quickack": false, 00:24:03.144 "enable_placement_id": 0, 00:24:03.144 "enable_zerocopy_send_server": true, 00:24:03.144 "enable_zerocopy_send_client": false, 00:24:03.144 "zerocopy_threshold": 0, 00:24:03.144 "tls_version": 0, 00:24:03.145 "enable_ktls": false 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "sock_impl_set_options", 00:24:03.145 "params": { 00:24:03.145 "impl_name": "uring", 00:24:03.145 "recv_buf_size": 2097152, 00:24:03.145 "send_buf_size": 2097152, 00:24:03.145 "enable_recv_pipe": true, 00:24:03.145 "enable_quickack": false, 00:24:03.145 "enable_placement_id": 0, 00:24:03.145 "enable_zerocopy_send_server": false, 00:24:03.145 "enable_zerocopy_send_client": false, 00:24:03.145 "zerocopy_threshold": 0, 00:24:03.145 "tls_version": 0, 00:24:03.145 "enable_ktls": false 00:24:03.145 } 00:24:03.145 } 00:24:03.145 ] 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "subsystem": "vmd", 00:24:03.145 "config": [] 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "subsystem": "accel", 00:24:03.145 "config": [ 00:24:03.145 { 00:24:03.145 "method": "accel_set_options", 00:24:03.145 "params": { 00:24:03.145 "small_cache_size": 128, 00:24:03.145 "large_cache_size": 16, 00:24:03.145 "task_count": 2048, 00:24:03.145 "sequence_count": 2048, 00:24:03.145 "buf_count": 2048 00:24:03.145 } 00:24:03.145 } 00:24:03.145 ] 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "subsystem": "bdev", 00:24:03.145 "config": [ 00:24:03.145 { 00:24:03.145 "method": "bdev_set_options", 00:24:03.145 "params": { 00:24:03.145 "bdev_io_pool_size": 65535, 00:24:03.145 "bdev_io_cache_size": 256, 00:24:03.145 "bdev_auto_examine": true, 00:24:03.145 "iobuf_small_cache_size": 128, 00:24:03.145 "iobuf_large_cache_size": 16 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_raid_set_options", 00:24:03.145 "params": { 00:24:03.145 "process_window_size_kb": 1024, 00:24:03.145 "process_max_bandwidth_mb_sec": 0 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_iscsi_set_options", 00:24:03.145 "params": { 00:24:03.145 "timeout_sec": 30 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_nvme_set_options", 00:24:03.145 "params": { 00:24:03.145 "action_on_timeout": "none", 00:24:03.145 "timeout_us": 0, 00:24:03.145 "timeout_admin_us": 0, 00:24:03.145 "keep_alive_timeout_ms": 10000, 00:24:03.145 "arbitration_burst": 0, 00:24:03.145 "low_priority_weight": 0, 00:24:03.145 "medium_priority_weight": 0, 00:24:03.145 "high_priority_weight": 0, 00:24:03.145 "nvme_adminq_poll_period_us": 10000, 00:24:03.145 "nvme_ioq_poll_period_us": 0, 00:24:03.145 "io_queue_requests": 512, 00:24:03.145 "delay_cmd_submit": true, 00:24:03.145 "transport_retry_count": 4, 00:24:03.145 "bdev_retry_count": 3, 00:24:03.145 "transport_ack_timeout": 0, 00:24:03.145 "ctrlr_loss_timeout_sec": 0, 00:24:03.145 "reconnect_delay_sec": 0, 00:24:03.145 "fast_io_fail_timeout_sec": 0, 00:24:03.145 "disable_auto_failback": false, 00:24:03.145 "generate_uuids": false, 00:24:03.145 "transport_tos": 0, 00:24:03.145 "nvme_error_stat": false, 00:24:03.145 "rdma_srq_size": 0, 00:24:03.145 "io_path_stat": false, 00:24:03.145 "allow_accel_sequence": false, 00:24:03.145 "rdma_max_cq_size": 0, 00:24:03.145 "rdma_cm_event_timeout_ms": 0, 00:24:03.145 
"dhchap_digests": [ 00:24:03.145 "sha256", 00:24:03.145 "sha384", 00:24:03.145 "sha512" 00:24:03.145 ], 00:24:03.145 "dhchap_dhgroups": [ 00:24:03.145 "null", 00:24:03.145 "ffdhe2048", 00:24:03.145 "ffdhe3072", 00:24:03.145 "ffdhe4096", 00:24:03.145 "ffdhe6144", 00:24:03.145 "ffdhe8192" 00:24:03.145 ] 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_nvme_attach_controller", 00:24:03.145 "params": { 00:24:03.145 "name": "nvme0", 00:24:03.145 "trtype": "TCP", 00:24:03.145 "adrfam": "IPv4", 00:24:03.145 "traddr": "10.0.0.3", 00:24:03.145 "trsvcid": "4420", 00:24:03.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.145 "prchk_reftag": false, 00:24:03.145 "prchk_guard": false, 00:24:03.145 "ctrlr_loss_timeout_sec": 0, 00:24:03.145 "reconnect_delay_sec": 0, 00:24:03.145 "fast_io_fail_timeout_sec": 0, 00:24:03.145 "psk": "key0", 00:24:03.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.145 "hdgst": false, 00:24:03.145 "ddgst": false, 00:24:03.145 "multipath": "multipath" 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_nvme_set_hotplug", 00:24:03.145 "params": { 00:24:03.145 "period_us": 100000, 00:24:03.145 "enable": false 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_enable_histogram", 00:24:03.145 "params": { 00:24:03.145 "name": "nvme0n1", 00:24:03.145 "enable": true 00:24:03.145 } 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "method": "bdev_wait_for_examine" 00:24:03.145 } 00:24:03.145 ] 00:24:03.145 }, 00:24:03.145 { 00:24:03.145 "subsystem": "nbd", 00:24:03.145 "config": [] 00:24:03.145 } 00:24:03.145 ] 00:24:03.145 }' 00:24:03.145 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.145 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.145 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.145 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.145 [2024-10-17 19:26:12.278658] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:24:03.145 [2024-10-17 19:26:12.278976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72855 ] 00:24:03.403 [2024-10-17 19:26:12.415777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.403 [2024-10-17 19:26:12.484674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.403 [2024-10-17 19:26:12.623504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:03.659 [2024-10-17 19:26:12.676017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.223 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.223 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:04.223 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.223 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:04.481 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.481 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.738 Running I/O for 1 seconds... 00:24:05.671 3968.00 IOPS, 15.50 MiB/s 00:24:05.671 Latency(us) 00:24:05.671 [2024-10-17T19:26:14.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.671 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:05.671 Verification LBA range: start 0x0 length 0x2000 00:24:05.671 nvme0n1 : 1.03 3979.04 15.54 0.00 0.00 31816.77 7566.43 21567.30 00:24:05.671 [2024-10-17T19:26:14.929Z] =================================================================================================================== 00:24:05.671 [2024-10-17T19:26:14.929Z] Total : 3979.04 15.54 0.00 0.00 31816.77 7566.43 21567.30 00:24:05.671 { 00:24:05.671 "results": [ 00:24:05.671 { 00:24:05.671 "job": "nvme0n1", 00:24:05.671 "core_mask": "0x2", 00:24:05.671 "workload": "verify", 00:24:05.671 "status": "finished", 00:24:05.671 "verify_range": { 00:24:05.671 "start": 0, 00:24:05.671 "length": 8192 00:24:05.671 }, 00:24:05.671 "queue_depth": 128, 00:24:05.671 "io_size": 4096, 00:24:05.671 "runtime": 1.029393, 00:24:05.671 "iops": 3979.0439608584866, 00:24:05.671 "mibps": 15.543140472103463, 00:24:05.671 "io_failed": 0, 00:24:05.671 "io_timeout": 0, 00:24:05.671 "avg_latency_us": 31816.774545454544, 00:24:05.671 "min_latency_us": 7566.4290909090905, 00:24:05.671 "max_latency_us": 21567.30181818182 00:24:05.671 } 00:24:05.671 ], 00:24:05.671 "core_count": 1 00:24:05.671 } 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
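As a quick consistency check on the results block above, the two throughput figures are one measurement in different units: MiB/s = IOPS x io_size / 2^20, with io_size = 4096 bytes in this run.

awk 'BEGIN { print 3979.0439608584866 * 4096 / 2^20 }'   # prints ~15.5431, matching "mibps"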
00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:05.671 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.671 nvmf_trace.0 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72855 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72855 ']' 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72855 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72855 00:24:05.997 killing process with pid 72855 00:24:05.997 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.997 00:24:05.997 Latency(us) 00:24:05.997 [2024-10-17T19:26:15.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.997 [2024-10-17T19:26:15.255Z] =================================================================================================================== 00:24:05.997 [2024-10-17T19:26:15.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72855' 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72855 00:24:05.997 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72855 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.997 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.997 rmmod nvme_tcp 00:24:05.997 rmmod nvme_fabrics 00:24:06.255 rmmod nvme_keyring 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 72823 ']' 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 72823 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72823 ']' 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72823 00:24:06.255 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72823 00:24:06.256 killing process with pid 72823 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72823' 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72823 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72823 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:06.256 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:06.514 19:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.514 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ijKYluEEMl /tmp/tmp.37VWGfgBmA /tmp/tmp.ihBASWRRuq 00:24:06.772 00:24:06.772 real 1m29.874s 00:24:06.772 user 2m27.373s 00:24:06.772 sys 0m29.374s 00:24:06.772 ************************************ 00:24:06.772 END TEST nvmf_tls 00:24:06.772 ************************************ 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 ************************************ 00:24:06.772 START TEST nvmf_fips 00:24:06.772 ************************************ 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.772 * Looking for test storage... 
00:24:06.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.772 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.772 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.031 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:07.031 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:07.031 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.031 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.032 --rc genhtml_branch_coverage=1 00:24:07.032 --rc genhtml_function_coverage=1 00:24:07.032 --rc genhtml_legend=1 00:24:07.032 --rc geninfo_all_blocks=1 00:24:07.032 --rc geninfo_unexecuted_blocks=1 00:24:07.032 00:24:07.032 ' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.032 --rc genhtml_branch_coverage=1 00:24:07.032 --rc genhtml_function_coverage=1 00:24:07.032 --rc genhtml_legend=1 00:24:07.032 --rc geninfo_all_blocks=1 00:24:07.032 --rc geninfo_unexecuted_blocks=1 00:24:07.032 00:24:07.032 ' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.032 --rc genhtml_branch_coverage=1 00:24:07.032 --rc genhtml_function_coverage=1 00:24:07.032 --rc genhtml_legend=1 00:24:07.032 --rc geninfo_all_blocks=1 00:24:07.032 --rc geninfo_unexecuted_blocks=1 00:24:07.032 00:24:07.032 ' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.032 --rc genhtml_branch_coverage=1 00:24:07.032 --rc genhtml_function_coverage=1 00:24:07.032 --rc genhtml_legend=1 00:24:07.032 --rc geninfo_all_blocks=1 00:24:07.032 --rc geninfo_unexecuted_blocks=1 00:24:07.032 00:24:07.032 ' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:07.032 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:07.033 Error setting digest 00:24:07.033 40B294744B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:07.033 40B294744B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:07.033 
19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:07.033 Cannot find device "nvmf_init_br" 00:24:07.033 19:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:07.033 Cannot find device "nvmf_init_br2" 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:24:07.033 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:07.291 Cannot find device "nvmf_tgt_br" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:07.291 Cannot find device "nvmf_tgt_br2" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:07.291 Cannot find device "nvmf_init_br" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:07.291 Cannot find device "nvmf_init_br2" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:07.291 Cannot find device "nvmf_tgt_br" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:07.291 Cannot find device "nvmf_tgt_br2" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:07.291 Cannot find device "nvmf_br" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:07.291 Cannot find device "nvmf_init_if" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:07.291 Cannot find device "nvmf_init_if2" 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:07.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:07.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:07.291 19:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:07.291 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:07.292 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:07.292 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:07.292 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:07.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:07.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:24:07.550 00:24:07.550 --- 10.0.0.3 ping statistics --- 00:24:07.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.550 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:07.550 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:07.550 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:24:07.550 00:24:07.550 --- 10.0.0.4 ping statistics --- 00:24:07.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.550 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:07.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:07.550 00:24:07.550 --- 10.0.0.1 ping statistics --- 00:24:07.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.550 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:07.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:07.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:24:07.550 00:24:07.550 --- 10.0.0.2 ping statistics --- 00:24:07.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.550 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=73172 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 73172 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73172 ']' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.550 19:26:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.550 [2024-10-17 19:26:16.766690] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
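Note on the nvmf_veth_init block traced above: it reduces to the shell sketch below. Only one initiator/target veth pair is shown; the helper in nvmf/common.sh creates a second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) the same way, and the interface names and 10.0.0.x addresses are simply the ones this suite uses.

  # condensed sketch of the veth + netns + bridge topology built above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host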
00:24:07.550 [2024-10-17 19:26:16.767060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.808 [2024-10-17 19:26:16.905904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.808 [2024-10-17 19:26:16.975263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.808 [2024-10-17 19:26:16.975356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.808 [2024-10-17 19:26:16.975372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.808 [2024-10-17 19:26:16.975384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.808 [2024-10-17 19:26:16.975393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.808 [2024-10-17 19:26:16.975914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.808 [2024-10-17 19:26:17.033385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Euk 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Euk 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Euk 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Euk 00:24:08.067 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.325 [2024-10-17 19:26:17.465280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.325 [2024-10-17 19:26:17.481199] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.325 [2024-10-17 19:26:17.481590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.325 malloc0 00:24:08.325 19:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73206 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73206 /var/tmp/bdevperf.sock 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73206 ']' 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.325 19:26:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:08.584 [2024-10-17 19:26:17.638525] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:08.584 [2024-10-17 19:26:17.639029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:24:08.584 [2024-10-17 19:26:17.778390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.842 [2024-10-17 19:26:17.871195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.842 [2024-10-17 19:26:17.928237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:09.775 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.775 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:09.775 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Euk 00:24:09.775 19:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.033 [2024-10-17 19:26:19.279793] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.290 TLSTESTn1 00:24:10.290 19:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.290 Running I/O for 10 seconds... 
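Before the results come back, the initiator-side steps of the fips.sh trace above can be summarized as the sketch below. It reuses the exact key, RPC socket and NQNs from this run (the /tmp/spdk-psk.Euk temp name is just what mktemp produced here), paths are shown relative to the SPDK repo, and the target-side subsystem/listener with the matching PSK is configured separately by setup_nvmf_tgt_conf via rpc.py and is not expanded.

  # 1. write the interchange-format TLS PSK to a private file
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path" && chmod 0600 "$key_path"

  # 2. start bdevperf as a secondary app with its own RPC socket, then wait for that socket
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # 3. load the PSK into bdevperf's keyring and attach the controller over NVMe/TCP with TLS
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # 4. kick off the queued workload (the "Running I/O for 10 seconds" above)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests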
00:24:12.596 3951.00 IOPS, 15.43 MiB/s [2024-10-17T19:26:22.789Z] 3967.50 IOPS, 15.50 MiB/s [2024-10-17T19:26:23.723Z] 3970.67 IOPS, 15.51 MiB/s [2024-10-17T19:26:24.658Z] 3768.50 IOPS, 14.72 MiB/s [2024-10-17T19:26:25.591Z] 3635.20 IOPS, 14.20 MiB/s [2024-10-17T19:26:26.524Z] 3544.67 IOPS, 13.85 MiB/s [2024-10-17T19:26:27.898Z] 3474.29 IOPS, 13.57 MiB/s [2024-10-17T19:26:28.833Z] 3429.00 IOPS, 13.39 MiB/s [2024-10-17T19:26:29.767Z] 3425.67 IOPS, 13.38 MiB/s [2024-10-17T19:26:29.767Z] 3409.20 IOPS, 13.32 MiB/s 00:24:20.509 Latency(us) 00:24:20.509 [2024-10-17T19:26:29.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.509 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.509 Verification LBA range: start 0x0 length 0x2000 00:24:20.509 TLSTESTn1 : 10.03 3412.79 13.33 0.00 0.00 37417.61 5898.24 35508.60 00:24:20.509 [2024-10-17T19:26:29.767Z] =================================================================================================================== 00:24:20.509 [2024-10-17T19:26:29.767Z] Total : 3412.79 13.33 0.00 0.00 37417.61 5898.24 35508.60 00:24:20.509 { 00:24:20.509 "results": [ 00:24:20.509 { 00:24:20.509 "job": "TLSTESTn1", 00:24:20.509 "core_mask": "0x4", 00:24:20.509 "workload": "verify", 00:24:20.509 "status": "finished", 00:24:20.509 "verify_range": { 00:24:20.509 "start": 0, 00:24:20.509 "length": 8192 00:24:20.509 }, 00:24:20.509 "queue_depth": 128, 00:24:20.509 "io_size": 4096, 00:24:20.509 "runtime": 10.026393, 00:24:20.509 "iops": 3412.79261644741, 00:24:20.509 "mibps": 13.331221157997696, 00:24:20.509 "io_failed": 0, 00:24:20.509 "io_timeout": 0, 00:24:20.509 "avg_latency_us": 37417.60889951594, 00:24:20.509 "min_latency_us": 5898.24, 00:24:20.509 "max_latency_us": 35508.59636363637 00:24:20.509 } 00:24:20.509 ], 00:24:20.509 "core_count": 1 00:24:20.509 } 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.509 nvmf_trace.0 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73206 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73206 ']' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73206 
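The teardown the trace is stepping through here follows the usual autotest pattern: archive the target's trace buffer from /dev/shm, then kill the bdevperf process only if it is still alive and is not a sudo wrapper. Roughly as below; the pid and file names are the ones from this run, $output_dir stands for spdk/../output here, and the real helpers are process_shm and killprocess in autotest_common.sh.

  # archive the shared-memory trace file left by the target
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')            # -> nvmf_trace.0
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" $shm_files

  # stop bdevperf (pid 73206) if it is still running
  pid=73206
  if kill -0 "$pid" 2>/dev/null; then
      name=$(ps --no-headers -o comm= "$pid")                      # reactor_2 in this run
      [ "$name" = sudo ] || { echo "killing process with pid $pid"; kill "$pid"; }
      wait "$pid"
  fi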
00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73206 00:24:20.509 killing process with pid 73206 00:24:20.509 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.509 00:24:20.509 Latency(us) 00:24:20.509 [2024-10-17T19:26:29.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.509 [2024-10-17T19:26:29.767Z] =================================================================================================================== 00:24:20.509 [2024-10-17T19:26:29.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73206' 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73206 00:24:20.509 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73206 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.768 19:26:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.768 rmmod nvme_tcp 00:24:20.768 rmmod nvme_fabrics 00:24:20.768 rmmod nvme_keyring 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 73172 ']' 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 73172 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73172 ']' 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73172 00:24:20.768 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73172 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.026 killing process with pid 73172 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73172' 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73172 00:24:21.026 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73172 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.284 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:21.285 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.542 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:24:21.543 19:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Euk 00:24:21.543 ************************************ 00:24:21.543 END TEST nvmf_fips 00:24:21.543 ************************************ 00:24:21.543 00:24:21.543 real 0m14.776s 00:24:21.543 user 0m20.479s 00:24:21.543 sys 0m6.077s 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:21.543 ************************************ 00:24:21.543 START TEST nvmf_control_msg_list 00:24:21.543 ************************************ 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:21.543 * Looking for test storage... 00:24:21.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:21.543 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.802 --rc genhtml_branch_coverage=1 00:24:21.802 --rc genhtml_function_coverage=1 00:24:21.802 --rc genhtml_legend=1 00:24:21.802 --rc geninfo_all_blocks=1 00:24:21.802 --rc geninfo_unexecuted_blocks=1 00:24:21.802 00:24:21.802 ' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.802 --rc genhtml_branch_coverage=1 00:24:21.802 --rc genhtml_function_coverage=1 00:24:21.802 --rc genhtml_legend=1 00:24:21.802 --rc geninfo_all_blocks=1 00:24:21.802 --rc geninfo_unexecuted_blocks=1 00:24:21.802 00:24:21.802 ' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.802 --rc genhtml_branch_coverage=1 00:24:21.802 --rc genhtml_function_coverage=1 00:24:21.802 --rc genhtml_legend=1 00:24:21.802 --rc geninfo_all_blocks=1 00:24:21.802 --rc geninfo_unexecuted_blocks=1 00:24:21.802 00:24:21.802 ' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.802 --rc genhtml_branch_coverage=1 00:24:21.802 --rc genhtml_function_coverage=1 00:24:21.802 --rc genhtml_legend=1 00:24:21.802 --rc geninfo_all_blocks=1 00:24:21.802 --rc 
geninfo_unexecuted_blocks=1 00:24:21.802 00:24:21.802 ' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.802 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.803 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:21.803 Cannot find device "nvmf_init_br" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:21.803 Cannot find device "nvmf_init_br2" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:21.803 Cannot find device "nvmf_tgt_br" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.803 Cannot find device "nvmf_tgt_br2" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:21.803 Cannot find device "nvmf_init_br" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:21.803 Cannot find device "nvmf_init_br2" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:21.803 Cannot find device "nvmf_tgt_br" 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:24:21.803 19:26:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:21.803 Cannot find device "nvmf_tgt_br2" 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:21.803 Cannot find device "nvmf_br" 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:21.803 Cannot find 
device "nvmf_init_if" 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:21.803 Cannot find device "nvmf_init_if2" 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:24:21.803 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:22.062 19:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:22.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:22.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:24:22.062 00:24:22.062 --- 10.0.0.3 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:22.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:22.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:22.062 00:24:22.062 --- 10.0.0.4 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:22.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:22.062 00:24:22.062 --- 10.0.0.1 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:22.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:22.062 00:24:22.062 --- 10.0.0.2 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.062 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73615 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73615 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73615 ']' 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
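nvmfappstart, whose output continues below, amounts to "run nvmf_tgt inside the test namespace, then block until its RPC socket answers". A minimal approximation of that wait is sketched here; polling rpc_get_methods is just one way to check readiness, and the real waitforlisten helper in autotest_common.sh adds a retry limit and more diagnostics.

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket before issuing any rpc.py commands
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
      sleep 0.5
  done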
00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.063 19:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.320 [2024-10-17 19:26:31.366092] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:22.320 [2024-10-17 19:26:31.366388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.320 [2024-10-17 19:26:31.505347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.577 [2024-10-17 19:26:31.589213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.577 [2024-10-17 19:26:31.589325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.577 [2024-10-17 19:26:31.589356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.577 [2024-10-17 19:26:31.589383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.577 [2024-10-17 19:26:31.589395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.577 [2024-10-17 19:26:31.589997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.577 [2024-10-17 19:26:31.656220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 [2024-10-17 19:26:32.456671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 Malloc0 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.509 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.510 [2024-10-17 19:26:32.505463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73647 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73648 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73649 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:23.510 19:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73647 00:24:23.510 [2024-10-17 19:26:32.689978] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.510 [2024-10-17 19:26:32.690513] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.510 [2024-10-17 19:26:32.699943] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:24.889 Initializing NVMe Controllers 00:24:24.889 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.889 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:24.889 Initialization complete. Launching workers. 00:24:24.889 ======================================================== 00:24:24.889 Latency(us) 00:24:24.889 Device Information : IOPS MiB/s Average min max 00:24:24.889 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3279.00 12.81 304.65 179.15 558.03 00:24:24.889 ======================================================== 00:24:24.889 Total : 3279.00 12.81 304.65 179.15 558.03 00:24:24.889 00:24:24.889 Initializing NVMe Controllers 00:24:24.889 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.889 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:24.889 Initialization complete. Launching workers. 00:24:24.889 ======================================================== 00:24:24.889 Latency(us) 00:24:24.889 Device Information : IOPS MiB/s Average min max 00:24:24.889 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3262.99 12.75 306.14 208.88 1124.11 00:24:24.889 ======================================================== 00:24:24.890 Total : 3262.99 12.75 306.14 208.88 1124.11 00:24:24.890 00:24:24.890 Initializing NVMe Controllers 00:24:24.890 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.890 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:24.890 Initialization complete. Launching workers. 
00:24:24.890 ======================================================== 00:24:24.890 Latency(us) 00:24:24.890 Device Information : IOPS MiB/s Average min max 00:24:24.890 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3291.97 12.86 303.37 126.46 796.71 00:24:24.890 ======================================================== 00:24:24.890 Total : 3291.97 12.86 303.37 126.46 796.71 00:24:24.890 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73648 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73649 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.890 rmmod nvme_tcp 00:24:24.890 rmmod nvme_fabrics 00:24:24.890 rmmod nvme_keyring 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73615 ']' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73615 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73615 ']' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73615 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73615 00:24:24.890 killing process with pid 73615 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73615' 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73615 00:24:24.890 19:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 73615 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:24:25.148 00:24:25.148 real 0m3.721s 00:24:25.148 user 0m5.654s 00:24:25.148 
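Condensed, the flow the nvmf_control_msg_list test drove above amounts to the following (a sketch assembled from the xtrace output; the script wraps each call in its rpc_cmd helper, runs against the TCP transport created earlier in the script, and waits on the individual perf PIDs rather than a bare wait):

  # Expose a small malloc bdev (32 MB, 512-byte blocks) over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # Three concurrent single-queue 4 KiB randread clients, one per core mask
  for mask in 0x2 0x4 0x8; do
      spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  done
  wait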
sys 0m1.541s 00:24:25.148 ************************************ 00:24:25.148 END TEST nvmf_control_msg_list 00:24:25.148 ************************************ 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.148 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:25.407 ************************************ 00:24:25.407 START TEST nvmf_wait_for_buf 00:24:25.407 ************************************ 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:25.407 * Looking for test storage... 00:24:25.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:25.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.407 --rc genhtml_branch_coverage=1 00:24:25.407 --rc genhtml_function_coverage=1 00:24:25.407 --rc genhtml_legend=1 00:24:25.407 --rc geninfo_all_blocks=1 00:24:25.407 --rc geninfo_unexecuted_blocks=1 00:24:25.407 00:24:25.407 ' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:25.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.407 --rc genhtml_branch_coverage=1 00:24:25.407 --rc genhtml_function_coverage=1 00:24:25.407 --rc genhtml_legend=1 00:24:25.407 --rc geninfo_all_blocks=1 00:24:25.407 --rc geninfo_unexecuted_blocks=1 00:24:25.407 00:24:25.407 ' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:25.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.407 --rc genhtml_branch_coverage=1 00:24:25.407 --rc genhtml_function_coverage=1 00:24:25.407 --rc genhtml_legend=1 00:24:25.407 --rc geninfo_all_blocks=1 00:24:25.407 --rc geninfo_unexecuted_blocks=1 00:24:25.407 00:24:25.407 ' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:25.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.407 --rc genhtml_branch_coverage=1 00:24:25.407 --rc genhtml_function_coverage=1 00:24:25.407 --rc genhtml_legend=1 00:24:25.407 --rc geninfo_all_blocks=1 00:24:25.407 --rc geninfo_unexecuted_blocks=1 00:24:25.407 00:24:25.407 ' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:25.407 19:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.407 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.408 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:25.666 Cannot find device "nvmf_init_br" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:25.666 Cannot find device "nvmf_init_br2" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:25.666 Cannot find device "nvmf_tgt_br" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:25.666 Cannot find device "nvmf_tgt_br2" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:25.666 Cannot find device "nvmf_init_br" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:25.666 Cannot find device "nvmf_init_br2" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:25.666 Cannot find device "nvmf_tgt_br" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:25.666 Cannot find device "nvmf_tgt_br2" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:25.666 Cannot find device "nvmf_br" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:25.666 Cannot find device "nvmf_init_if" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:25.666 Cannot find device "nvmf_init_if2" 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:25.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:25.666 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:25.666 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:25.925 19:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:25.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:25.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:25.925 00:24:25.925 --- 10.0.0.3 ping statistics --- 00:24:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.925 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:25.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:25.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:24:25.925 00:24:25.925 --- 10.0.0.4 ping statistics --- 00:24:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.925 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:25.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:24:25.925 00:24:25.925 --- 10.0.0.1 ping statistics --- 00:24:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.925 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:25.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
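The pings above are the final step of nvmf_veth_init, which builds the self-contained test network the target listens on. Stripped of the xtrace noise, the first initiator/target pair is set up essentially as follows (names and addresses as in the log; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way):

  ip netns add nvmf_tgt_ns_spdk                                # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # Bridge the host-side peers together and let NVMe/TCP (port 4420) in
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # (the real helper tags the rule with an SPDK_NVMF comment so nvmftestfini can strip it)
  ping -c 1 10.0.0.3                                           # reachability check, as above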
00:24:25.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:24:25.925 00:24:25.925 --- 10.0.0.2 ping statistics --- 00:24:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.925 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=73878 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 73878 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73878 ']' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.925 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.183 [2024-10-17 19:26:35.218123] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:24:26.183 [2024-10-17 19:26:35.218285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.183 [2024-10-17 19:26:35.360346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.183 [2024-10-17 19:26:35.425791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.183 [2024-10-17 19:26:35.425895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.183 [2024-10-17 19:26:35.425917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.183 [2024-10-17 19:26:35.425933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.183 [2024-10-17 19:26:35.425947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.183 [2024-10-17 19:26:35.426464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.441 19:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 [2024-10-17 19:26:35.596076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.441 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.441 Malloc0 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 [2024-10-17 19:26:35.667553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:26.442 [2024-10-17 19:26:35.691647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.442 19:26:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:26.698 [2024-10-17 19:26:35.879319] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:28.071 Initializing NVMe Controllers 00:24:28.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:24:28.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:28.071 Initialization complete. Launching workers. 00:24:28.071 ======================================================== 00:24:28.071 Latency(us) 00:24:28.071 Device Information : IOPS MiB/s Average min max 00:24:28.071 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8056.30 7476.83 11503.12 00:24:28.071 ======================================================== 00:24:28.071 Total : 500.00 62.50 8056.30 7476.83 11503.12 00:24:28.071 00:24:28.071 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:28.071 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.072 rmmod nvme_tcp 00:24:28.072 rmmod nvme_fabrics 00:24:28.072 rmmod nvme_keyring 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 73878 ']' 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 73878 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73878 ']' 00:24:28.072 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 73878 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73878 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.330 killing process with pid 73878 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73878' 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73878 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73878 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.330 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.331 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.589 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:24:28.590 00:24:28.590 real 0m3.383s 00:24:28.590 user 0m2.632s 00:24:28.590 sys 0m0.835s 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.590 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:28.590 ************************************ 00:24:28.590 END TEST nvmf_wait_for_buf 00:24:28.590 ************************************ 00:24:28.849 19:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:24:28.849 19:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:24:28.849 19:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:24:28.849 00:24:28.849 real 5m21.932s 00:24:28.849 user 11m20.442s 00:24:28.849 sys 1m11.776s 00:24:28.849 19:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.849 19:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.849 ************************************ 00:24:28.849 END TEST nvmf_target_extra 00:24:28.849 ************************************ 00:24:28.849 19:26:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:28.849 19:26:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:28.849 19:26:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.849 19:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.849 ************************************ 00:24:28.849 START TEST nvmf_host 00:24:28.849 ************************************ 00:24:28.849 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:28.849 * Looking for test storage... 
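In short, the nvmf_wait_for_buf run that just finished starts the target under --wait-for-rpc, shrinks the shared iobuf small-buffer pool to 154 entries before framework init, and creates the TCP transport with a deliberately small buffer budget (-n 24 -b 24). The 128 KiB queue-depth-4 read load then has to wait for data buffers, and the 4750 small_pool retries reported by iobuf_get_stats are what the test looks for (the wait_for_buf.sh check at line 33 presumably treats a zero retry count as failure). Reconstructed from the log, the RPC sequence is roughly (rpc_cmd in the script, written as rpc.py calls here):

  # With the target started under --wait-for-rpc, tune pools before init
  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc.py framework_start_init

  # Same malloc-backed TCP subsystem as before, with a tiny transport buffer budget
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # 128 KiB reads at queue depth 4 overrun the small pool and force retries
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

  # Pass condition: the nvmf_TCP small-pool retry counter is non-zero
  rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'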
00:24:28.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:24:28.849 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.849 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.849 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:29.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.107 --rc genhtml_branch_coverage=1 00:24:29.107 --rc genhtml_function_coverage=1 00:24:29.107 --rc genhtml_legend=1 00:24:29.107 --rc geninfo_all_blocks=1 00:24:29.107 --rc geninfo_unexecuted_blocks=1 00:24:29.107 00:24:29.107 ' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:29.107 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:29.107 --rc genhtml_branch_coverage=1 00:24:29.107 --rc genhtml_function_coverage=1 00:24:29.107 --rc genhtml_legend=1 00:24:29.107 --rc geninfo_all_blocks=1 00:24:29.107 --rc geninfo_unexecuted_blocks=1 00:24:29.107 00:24:29.107 ' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:29.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.107 --rc genhtml_branch_coverage=1 00:24:29.107 --rc genhtml_function_coverage=1 00:24:29.107 --rc genhtml_legend=1 00:24:29.107 --rc geninfo_all_blocks=1 00:24:29.107 --rc geninfo_unexecuted_blocks=1 00:24:29.107 00:24:29.107 ' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:29.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.107 --rc genhtml_branch_coverage=1 00:24:29.107 --rc genhtml_function_coverage=1 00:24:29.107 --rc genhtml_legend=1 00:24:29.107 --rc geninfo_all_blocks=1 00:24:29.107 --rc geninfo_unexecuted_blocks=1 00:24:29.107 00:24:29.107 ' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.107 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:29.108 
19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.108 ************************************ 00:24:29.108 START TEST nvmf_identify 00:24:29.108 ************************************ 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:29.108 * Looking for test storage... 00:24:29.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.108 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:29.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.393 --rc genhtml_branch_coverage=1 00:24:29.393 --rc genhtml_function_coverage=1 00:24:29.393 --rc genhtml_legend=1 00:24:29.393 --rc geninfo_all_blocks=1 00:24:29.393 --rc geninfo_unexecuted_blocks=1 00:24:29.393 00:24:29.393 ' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:29.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.393 --rc genhtml_branch_coverage=1 00:24:29.393 --rc genhtml_function_coverage=1 00:24:29.393 --rc genhtml_legend=1 00:24:29.393 --rc geninfo_all_blocks=1 00:24:29.393 --rc geninfo_unexecuted_blocks=1 00:24:29.393 00:24:29.393 ' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:29.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.393 --rc genhtml_branch_coverage=1 00:24:29.393 --rc genhtml_function_coverage=1 00:24:29.393 --rc genhtml_legend=1 00:24:29.393 --rc geninfo_all_blocks=1 00:24:29.393 --rc geninfo_unexecuted_blocks=1 00:24:29.393 00:24:29.393 ' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:29.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.393 --rc genhtml_branch_coverage=1 00:24:29.393 --rc genhtml_function_coverage=1 00:24:29.393 --rc genhtml_legend=1 00:24:29.393 --rc geninfo_all_blocks=1 00:24:29.393 --rc geninfo_unexecuted_blocks=1 00:24:29.393 00:24:29.393 ' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.393 
19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.393 19:26:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.393 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:29.394 Cannot find device "nvmf_init_br" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:29.394 Cannot find device "nvmf_init_br2" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:29.394 Cannot find device "nvmf_tgt_br" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:24:29.394 Cannot find device "nvmf_tgt_br2" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:29.394 Cannot find device "nvmf_init_br" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:29.394 Cannot find device "nvmf_init_br2" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:29.394 Cannot find device "nvmf_tgt_br" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:29.394 Cannot find device "nvmf_tgt_br2" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:29.394 Cannot find device "nvmf_br" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:29.394 Cannot find device "nvmf_init_if" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:29.394 Cannot find device "nvmf_init_if2" 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:29.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:29.394 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:29.666 
19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:29.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:29.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:24:29.666 00:24:29.666 --- 10.0.0.3 ping statistics --- 00:24:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.666 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:29.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:29.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:24:29.666 00:24:29.666 --- 10.0.0.4 ping statistics --- 00:24:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.666 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:24:29.666 00:24:29.666 --- 10.0.0.1 ping statistics --- 00:24:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.666 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:29.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:24:29.666 00:24:29.666 --- 10.0.0.2 ping statistics --- 00:24:29.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.666 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74195 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74195 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74195 ']' 00:24:29.666 
19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.666 19:26:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:29.926 [2024-10-17 19:26:38.928235] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:29.926 [2024-10-17 19:26:38.928385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.926 [2024-10-17 19:26:39.073341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.926 [2024-10-17 19:26:39.149736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.926 [2024-10-17 19:26:39.150119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.926 [2024-10-17 19:26:39.150281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.926 [2024-10-17 19:26:39.150409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.926 [2024-10-17 19:26:39.150530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
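The nvmftestinit trace above builds the virtual test topology before the target comes up: veth pairs are created, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace and addressed, both halves are bridged, TCP port 4420 is opened in iptables, reachability is checked with ping, the nvme-tcp initiator module is loaded, and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using only commands that appear in the trace (interface names and 10.0.0.x addresses as logged; the second veth pair, the "ip link set ... up" steps, and the iptables comment tags are omitted for brevity):

    # veth pairs: initiator end stays in the root namespace, target end moves into the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 for the initiator side, 10.0.0.3 for the target side
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bridge the *_br ends together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # reachability check and kernel initiator support
    ping -c 1 10.0.0.3
    modprobe nvme-tcp

    # start the SPDK target inside the namespace (shm id, event mask and core mask as logged)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF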
00:24:29.926 [2024-10-17 19:26:39.151897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.926 [2024-10-17 19:26:39.152042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.926 [2024-10-17 19:26:39.152648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.926 [2024-10-17 19:26:39.152663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.184 [2024-10-17 19:26:39.212962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 [2024-10-17 19:26:39.297921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 Malloc0 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 [2024-10-17 19:26:39.418242] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.184 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.184 [ 00:24:30.184 { 00:24:30.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:30.184 "subtype": "Discovery", 00:24:30.184 "listen_addresses": [ 00:24:30.184 { 00:24:30.184 "trtype": "TCP", 00:24:30.184 "adrfam": "IPv4", 00:24:30.184 "traddr": "10.0.0.3", 00:24:30.184 "trsvcid": "4420" 00:24:30.184 } 00:24:30.184 ], 00:24:30.184 "allow_any_host": true, 00:24:30.184 "hosts": [] 00:24:30.184 }, 00:24:30.184 { 00:24:30.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.444 "subtype": "NVMe", 00:24:30.444 "listen_addresses": [ 00:24:30.444 { 00:24:30.444 "trtype": "TCP", 00:24:30.444 "adrfam": "IPv4", 00:24:30.444 "traddr": "10.0.0.3", 00:24:30.444 "trsvcid": "4420" 00:24:30.444 } 00:24:30.444 ], 00:24:30.444 "allow_any_host": true, 00:24:30.444 "hosts": [], 00:24:30.444 "serial_number": "SPDK00000000000001", 00:24:30.444 "model_number": "SPDK bdev Controller", 00:24:30.444 "max_namespaces": 32, 00:24:30.444 "min_cntlid": 1, 00:24:30.444 "max_cntlid": 65519, 00:24:30.444 "namespaces": [ 00:24:30.444 { 00:24:30.444 "nsid": 1, 00:24:30.444 "bdev_name": "Malloc0", 00:24:30.444 "name": "Malloc0", 00:24:30.444 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:30.444 "eui64": "ABCDEF0123456789", 00:24:30.444 "uuid": "953d9c55-e668-4e33-b4db-0e69bae327f2" 00:24:30.444 } 00:24:30.444 ] 00:24:30.444 } 00:24:30.444 ] 00:24:30.444 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.444 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:30.444 [2024-10-17 19:26:39.469439] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
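Before the identify example is launched, the freshly started target is configured entirely over its JSON-RPC socket: the rpc_cmd calls above create the TCP transport, a malloc bdev (Malloc0, size 64 with 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and TCP listeners on 10.0.0.3:4420 for both the subsystem and discovery, after which nvmf_get_subsystems dumps the resulting configuration. A plain sketch of the same sequence follows; the suite's rpc_cmd wrapper is replaced here by scripts/rpc.py against the default /var/tmp/spdk.sock, which is an assumption, while the method names and arguments are the ones logged:

    # transport, backing bdev, subsystem, namespace, listeners -- same order as the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_get_subsystems

    # then query the discovery service, exactly as host/identify.sh does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all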
00:24:30.444 [2024-10-17 19:26:39.469512] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74227 ] 00:24:30.444 [2024-10-17 19:26:39.618792] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:30.444 [2024-10-17 19:26:39.618897] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:30.444 [2024-10-17 19:26:39.618907] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:30.444 [2024-10-17 19:26:39.618926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:30.444 [2024-10-17 19:26:39.618940] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:30.444 [2024-10-17 19:26:39.619401] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:30.444 [2024-10-17 19:26:39.619481] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x245c750 0 00:24:30.444 [2024-10-17 19:26:39.625172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:30.444 [2024-10-17 19:26:39.625204] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:30.444 [2024-10-17 19:26:39.625213] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:30.444 [2024-10-17 19:26:39.625218] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:30.444 [2024-10-17 19:26:39.625271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.625282] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.625287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.444 [2024-10-17 19:26:39.625307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:30.444 [2024-10-17 19:26:39.625355] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.444 [2024-10-17 19:26:39.633166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.444 [2024-10-17 19:26:39.633189] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.444 [2024-10-17 19:26:39.633195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.444 [2024-10-17 19:26:39.633218] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:30.444 [2024-10-17 19:26:39.633229] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:30.444 [2024-10-17 19:26:39.633237] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:30.444 [2024-10-17 19:26:39.633267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.444 
[2024-10-17 19:26:39.633289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.444 [2024-10-17 19:26:39.633306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.444 [2024-10-17 19:26:39.633358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.444 [2024-10-17 19:26:39.633430] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.444 [2024-10-17 19:26:39.633446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.444 [2024-10-17 19:26:39.633454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.444 [2024-10-17 19:26:39.633486] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:30.444 [2024-10-17 19:26:39.633503] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:30.444 [2024-10-17 19:26:39.633519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633537] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.444 [2024-10-17 19:26:39.633553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.444 [2024-10-17 19:26:39.633595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.444 [2024-10-17 19:26:39.633640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.444 [2024-10-17 19:26:39.633654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.444 [2024-10-17 19:26:39.633662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.444 [2024-10-17 19:26:39.633671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.444 [2024-10-17 19:26:39.633682] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:30.445 [2024-10-17 19:26:39.633699] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.633712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.633733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.445 [2024-10-17 19:26:39.633760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.633825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.633836] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.633841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.633854] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.633867] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633874] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.633888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.445 [2024-10-17 19:26:39.633912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.633964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.633972] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.633987] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.633992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.633999] nvme_ctrlr.c:3950:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:30.445 [2024-10-17 19:26:39.634006] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.634016] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.634124] nvme_ctrlr.c:4148:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:30.445 [2024-10-17 19:26:39.634147] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.634160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.634181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.445 [2024-10-17 19:26:39.634206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.634262] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.634271] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.634276] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 
[2024-10-17 19:26:39.634281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.634288] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:30.445 [2024-10-17 19:26:39.634301] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.634322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.445 [2024-10-17 19:26:39.634343] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.634391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.634399] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.634404] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634409] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.634415] nvme_ctrlr.c:3985:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:30.445 [2024-10-17 19:26:39.634422] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:30.445 [2024-10-17 19:26:39.634432] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:30.445 [2024-10-17 19:26:39.634447] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:30.445 [2024-10-17 19:26:39.634461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.634477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.445 [2024-10-17 19:26:39.634505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.634604] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.445 [2024-10-17 19:26:39.634615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.445 [2024-10-17 19:26:39.634620] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x245c750): datao=0, datal=4096, cccid=0 00:24:30.445 [2024-10-17 19:26:39.634633] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c0840) on tqpair(0x245c750): expected_datao=0, payload_size=4096 00:24:30.445 [2024-10-17 19:26:39.634640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 
[2024-10-17 19:26:39.634651] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634657] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.634677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.634682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.634699] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:30.445 [2024-10-17 19:26:39.634706] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:30.445 [2024-10-17 19:26:39.634713] nvme_ctrlr.c:2130:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:30.445 [2024-10-17 19:26:39.634720] nvme_ctrlr.c:2154:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:30.445 [2024-10-17 19:26:39.634726] nvme_ctrlr.c:2169:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:30.445 [2024-10-17 19:26:39.634733] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:30.445 [2024-10-17 19:26:39.634744] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:30.445 [2024-10-17 19:26:39.634755] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.445 [2024-10-17 19:26:39.634776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:30.445 [2024-10-17 19:26:39.634800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.445 [2024-10-17 19:26:39.634855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.445 [2024-10-17 19:26:39.634864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.445 [2024-10-17 19:26:39.634869] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.445 [2024-10-17 19:26:39.634885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.445 [2024-10-17 19:26:39.634895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.634904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.446 [2024-10-17 19:26:39.634914] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634919] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.634932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.446 [2024-10-17 19:26:39.634940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.634958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.446 [2024-10-17 19:26:39.634967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.634977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.634984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.446 [2024-10-17 19:26:39.634994] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:30.446 [2024-10-17 19:26:39.635018] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:30.446 [2024-10-17 19:26:39.635042] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635049] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.635059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.446 [2024-10-17 19:26:39.635086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0840, cid 0, qid 0 00:24:30.446 [2024-10-17 19:26:39.635095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c09c0, cid 1, qid 0 00:24:30.446 [2024-10-17 19:26:39.635102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0b40, cid 2, qid 0 00:24:30.446 [2024-10-17 19:26:39.635108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.446 [2024-10-17 19:26:39.635115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0e40, cid 4, qid 0 00:24:30.446 [2024-10-17 19:26:39.635233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.446 [2024-10-17 19:26:39.635244] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.446 [2024-10-17 19:26:39.635249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635254] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0e40) on tqpair=0x245c750 00:24:30.446 [2024-10-17 19:26:39.635261] 
nvme_ctrlr.c:3103:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:30.446 [2024-10-17 19:26:39.635269] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:30.446 [2024-10-17 19:26:39.635284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635290] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.635300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.446 [2024-10-17 19:26:39.635323] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0e40, cid 4, qid 0 00:24:30.446 [2024-10-17 19:26:39.635393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.446 [2024-10-17 19:26:39.635402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.446 [2024-10-17 19:26:39.635407] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635412] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x245c750): datao=0, datal=4096, cccid=4 00:24:30.446 [2024-10-17 19:26:39.635418] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c0e40) on tqpair(0x245c750): expected_datao=0, payload_size=4096 00:24:30.446 [2024-10-17 19:26:39.635424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635433] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635438] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.446 [2024-10-17 19:26:39.635458] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.446 [2024-10-17 19:26:39.635463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0e40) on tqpair=0x245c750 00:24:30.446 [2024-10-17 19:26:39.635486] nvme_ctrlr.c:4246:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:30.446 [2024-10-17 19:26:39.635535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635544] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.635554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.446 [2024-10-17 19:26:39.635564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.635587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.446 [2024-10-17 19:26:39.635627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x24c0e40, cid 4, qid 0 00:24:30.446 [2024-10-17 19:26:39.635641] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0fc0, cid 5, qid 0 00:24:30.446 [2024-10-17 19:26:39.635740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.446 [2024-10-17 19:26:39.635752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.446 [2024-10-17 19:26:39.635757] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635762] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x245c750): datao=0, datal=1024, cccid=4 00:24:30.446 [2024-10-17 19:26:39.635768] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c0e40) on tqpair(0x245c750): expected_datao=0, payload_size=1024 00:24:30.446 [2024-10-17 19:26:39.635774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635784] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635789] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635797] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.446 [2024-10-17 19:26:39.635804] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.446 [2024-10-17 19:26:39.635809] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0fc0) on tqpair=0x245c750 00:24:30.446 [2024-10-17 19:26:39.635839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.446 [2024-10-17 19:26:39.635848] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.446 [2024-10-17 19:26:39.635853] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635858] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0e40) on tqpair=0x245c750 00:24:30.446 [2024-10-17 19:26:39.635875] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.635881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.635891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.446 [2024-10-17 19:26:39.635920] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0e40, cid 4, qid 0 00:24:30.446 [2024-10-17 19:26:39.635989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.446 [2024-10-17 19:26:39.636006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.446 [2024-10-17 19:26:39.636014] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636022] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x245c750): datao=0, datal=3072, cccid=4 00:24:30.446 [2024-10-17 19:26:39.636032] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c0e40) on tqpair(0x245c750): expected_datao=0, payload_size=3072 00:24:30.446 [2024-10-17 19:26:39.636041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636056] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636064] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.446 [2024-10-17 19:26:39.636093] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.446 [2024-10-17 19:26:39.636101] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0e40) on tqpair=0x245c750 00:24:30.446 [2024-10-17 19:26:39.636150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.446 [2024-10-17 19:26:39.636162] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x245c750) 00:24:30.446 [2024-10-17 19:26:39.636176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.446 [2024-10-17 19:26:39.636217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0e40, cid 4, qid 0 00:24:30.446 [2024-10-17 19:26:39.636281] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.447 [2024-10-17 19:26:39.636290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.447 [2024-10-17 19:26:39.636295] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.447 [2024-10-17 19:26:39.636300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x245c750): datao=0, datal=8, cccid=4 00:24:30.447 [2024-10-17 19:26:39.636306] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c0e40) on tqpair(0x245c750): expected_datao=0, payload_size=8 00:24:30.447 [2024-10-17 19:26:39.636312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.447 [2024-10-17 19:26:39.636321] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.447 [2024-10-17 19:26:39.636326] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.447 [2024-10-17 19:26:39.636345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.447 [2024-10-17 19:26:39.636354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.447 [2024-10-17 19:26:39.636359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.447 [2024-10-17 19:26:39.636364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0e40) on tqpair=0x245c750 00:24:30.447 ===================================================== 00:24:30.447 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:30.447 ===================================================== 00:24:30.447 Controller Capabilities/Features 00:24:30.447 ================================ 00:24:30.447 Vendor ID: 0000 00:24:30.447 Subsystem Vendor ID: 0000 00:24:30.447 Serial Number: .................... 00:24:30.447 Model Number: ........................................ 
00:24:30.447 Firmware Version: 25.01 00:24:30.447 Recommended Arb Burst: 0 00:24:30.447 IEEE OUI Identifier: 00 00 00 00:24:30.447 Multi-path I/O 00:24:30.447 May have multiple subsystem ports: No 00:24:30.447 May have multiple controllers: No 00:24:30.447 Associated with SR-IOV VF: No 00:24:30.447 Max Data Transfer Size: 131072 00:24:30.447 Max Number of Namespaces: 0 00:24:30.447 Max Number of I/O Queues: 1024 00:24:30.447 NVMe Specification Version (VS): 1.3 00:24:30.447 NVMe Specification Version (Identify): 1.3 00:24:30.447 Maximum Queue Entries: 128 00:24:30.447 Contiguous Queues Required: Yes 00:24:30.447 Arbitration Mechanisms Supported 00:24:30.447 Weighted Round Robin: Not Supported 00:24:30.447 Vendor Specific: Not Supported 00:24:30.447 Reset Timeout: 15000 ms 00:24:30.447 Doorbell Stride: 4 bytes 00:24:30.447 NVM Subsystem Reset: Not Supported 00:24:30.447 Command Sets Supported 00:24:30.447 NVM Command Set: Supported 00:24:30.447 Boot Partition: Not Supported 00:24:30.447 Memory Page Size Minimum: 4096 bytes 00:24:30.447 Memory Page Size Maximum: 4096 bytes 00:24:30.447 Persistent Memory Region: Not Supported 00:24:30.447 Optional Asynchronous Events Supported 00:24:30.447 Namespace Attribute Notices: Not Supported 00:24:30.447 Firmware Activation Notices: Not Supported 00:24:30.447 ANA Change Notices: Not Supported 00:24:30.447 PLE Aggregate Log Change Notices: Not Supported 00:24:30.447 LBA Status Info Alert Notices: Not Supported 00:24:30.447 EGE Aggregate Log Change Notices: Not Supported 00:24:30.447 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.447 Zone Descriptor Change Notices: Not Supported 00:24:30.447 Discovery Log Change Notices: Supported 00:24:30.447 Controller Attributes 00:24:30.447 128-bit Host Identifier: Not Supported 00:24:30.447 Non-Operational Permissive Mode: Not Supported 00:24:30.447 NVM Sets: Not Supported 00:24:30.447 Read Recovery Levels: Not Supported 00:24:30.447 Endurance Groups: Not Supported 00:24:30.447 Predictable Latency Mode: Not Supported 00:24:30.447 Traffic Based Keep ALive: Not Supported 00:24:30.447 Namespace Granularity: Not Supported 00:24:30.447 SQ Associations: Not Supported 00:24:30.447 UUID List: Not Supported 00:24:30.447 Multi-Domain Subsystem: Not Supported 00:24:30.447 Fixed Capacity Management: Not Supported 00:24:30.447 Variable Capacity Management: Not Supported 00:24:30.447 Delete Endurance Group: Not Supported 00:24:30.447 Delete NVM Set: Not Supported 00:24:30.447 Extended LBA Formats Supported: Not Supported 00:24:30.447 Flexible Data Placement Supported: Not Supported 00:24:30.447 00:24:30.447 Controller Memory Buffer Support 00:24:30.447 ================================ 00:24:30.447 Supported: No 00:24:30.447 00:24:30.447 Persistent Memory Region Support 00:24:30.447 ================================ 00:24:30.447 Supported: No 00:24:30.447 00:24:30.447 Admin Command Set Attributes 00:24:30.447 ============================ 00:24:30.447 Security Send/Receive: Not Supported 00:24:30.447 Format NVM: Not Supported 00:24:30.447 Firmware Activate/Download: Not Supported 00:24:30.447 Namespace Management: Not Supported 00:24:30.447 Device Self-Test: Not Supported 00:24:30.447 Directives: Not Supported 00:24:30.447 NVMe-MI: Not Supported 00:24:30.447 Virtualization Management: Not Supported 00:24:30.447 Doorbell Buffer Config: Not Supported 00:24:30.447 Get LBA Status Capability: Not Supported 00:24:30.447 Command & Feature Lockdown Capability: Not Supported 00:24:30.447 Abort Command Limit: 1 00:24:30.447 Async 
Event Request Limit: 4 00:24:30.447 Number of Firmware Slots: N/A 00:24:30.447 Firmware Slot 1 Read-Only: N/A 00:24:30.447 Firmware Activation Without Reset: N/A 00:24:30.447 Multiple Update Detection Support: N/A 00:24:30.447 Firmware Update Granularity: No Information Provided 00:24:30.447 Per-Namespace SMART Log: No 00:24:30.447 Asymmetric Namespace Access Log Page: Not Supported 00:24:30.447 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:30.447 Command Effects Log Page: Not Supported 00:24:30.447 Get Log Page Extended Data: Supported 00:24:30.447 Telemetry Log Pages: Not Supported 00:24:30.447 Persistent Event Log Pages: Not Supported 00:24:30.447 Supported Log Pages Log Page: May Support 00:24:30.447 Commands Supported & Effects Log Page: Not Supported 00:24:30.447 Feature Identifiers & Effects Log Page:May Support 00:24:30.447 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.447 Data Area 4 for Telemetry Log: Not Supported 00:24:30.447 Error Log Page Entries Supported: 128 00:24:30.447 Keep Alive: Not Supported 00:24:30.447 00:24:30.447 NVM Command Set Attributes 00:24:30.447 ========================== 00:24:30.447 Submission Queue Entry Size 00:24:30.447 Max: 1 00:24:30.447 Min: 1 00:24:30.447 Completion Queue Entry Size 00:24:30.447 Max: 1 00:24:30.447 Min: 1 00:24:30.447 Number of Namespaces: 0 00:24:30.447 Compare Command: Not Supported 00:24:30.447 Write Uncorrectable Command: Not Supported 00:24:30.447 Dataset Management Command: Not Supported 00:24:30.447 Write Zeroes Command: Not Supported 00:24:30.447 Set Features Save Field: Not Supported 00:24:30.447 Reservations: Not Supported 00:24:30.447 Timestamp: Not Supported 00:24:30.447 Copy: Not Supported 00:24:30.447 Volatile Write Cache: Not Present 00:24:30.447 Atomic Write Unit (Normal): 1 00:24:30.447 Atomic Write Unit (PFail): 1 00:24:30.447 Atomic Compare & Write Unit: 1 00:24:30.447 Fused Compare & Write: Supported 00:24:30.447 Scatter-Gather List 00:24:30.447 SGL Command Set: Supported 00:24:30.447 SGL Keyed: Supported 00:24:30.447 SGL Bit Bucket Descriptor: Not Supported 00:24:30.447 SGL Metadata Pointer: Not Supported 00:24:30.447 Oversized SGL: Not Supported 00:24:30.447 SGL Metadata Address: Not Supported 00:24:30.447 SGL Offset: Supported 00:24:30.447 Transport SGL Data Block: Not Supported 00:24:30.447 Replay Protected Memory Block: Not Supported 00:24:30.447 00:24:30.447 Firmware Slot Information 00:24:30.447 ========================= 00:24:30.447 Active slot: 0 00:24:30.447 00:24:30.447 00:24:30.447 Error Log 00:24:30.447 ========= 00:24:30.447 00:24:30.447 Active Namespaces 00:24:30.447 ================= 00:24:30.447 Discovery Log Page 00:24:30.447 ================== 00:24:30.447 Generation Counter: 2 00:24:30.447 Number of Records: 2 00:24:30.447 Record Format: 0 00:24:30.447 00:24:30.447 Discovery Log Entry 0 00:24:30.447 ---------------------- 00:24:30.447 Transport Type: 3 (TCP) 00:24:30.447 Address Family: 1 (IPv4) 00:24:30.447 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:30.447 Entry Flags: 00:24:30.447 Duplicate Returned Information: 1 00:24:30.447 Explicit Persistent Connection Support for Discovery: 1 00:24:30.447 Transport Requirements: 00:24:30.447 Secure Channel: Not Required 00:24:30.447 Port ID: 0 (0x0000) 00:24:30.447 Controller ID: 65535 (0xffff) 00:24:30.447 Admin Max SQ Size: 128 00:24:30.447 Transport Service Identifier: 4420 00:24:30.447 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:30.447 Transport Address: 10.0.0.3 00:24:30.447 
Discovery Log Entry 1 00:24:30.447 ---------------------- 00:24:30.447 Transport Type: 3 (TCP) 00:24:30.447 Address Family: 1 (IPv4) 00:24:30.447 Subsystem Type: 2 (NVM Subsystem) 00:24:30.447 Entry Flags: 00:24:30.447 Duplicate Returned Information: 0 00:24:30.447 Explicit Persistent Connection Support for Discovery: 0 00:24:30.448 Transport Requirements: 00:24:30.448 Secure Channel: Not Required 00:24:30.448 Port ID: 0 (0x0000) 00:24:30.448 Controller ID: 65535 (0xffff) 00:24:30.448 Admin Max SQ Size: 128 00:24:30.448 Transport Service Identifier: 4420 00:24:30.448 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:30.448 Transport Address: 10.0.0.3 [2024-10-17 19:26:39.636538] nvme_ctrlr.c:4443:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:30.448 [2024-10-17 19:26:39.636561] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0840) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.448 [2024-10-17 19:26:39.636579] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c09c0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.448 [2024-10-17 19:26:39.636592] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0b40) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.448 [2024-10-17 19:26:39.636605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.448 [2024-10-17 19:26:39.636624] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.448 [2024-10-17 19:26:39.636659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.448 [2024-10-17 19:26:39.636685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.448 [2024-10-17 19:26:39.636742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.448 [2024-10-17 19:26:39.636750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.448 [2024-10-17 19:26:39.636755] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636759] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.448 [2024-10-17 
19:26:39.636783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.448 [2024-10-17 19:26:39.636806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.448 [2024-10-17 19:26:39.636872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.448 [2024-10-17 19:26:39.636879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.448 [2024-10-17 19:26:39.636883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.636893] nvme_ctrlr.c:1193:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:30.448 [2024-10-17 19:26:39.636899] nvme_ctrlr.c:1196:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:30.448 [2024-10-17 19:26:39.636909] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636914] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.636918] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.448 [2024-10-17 19:26:39.636926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.448 [2024-10-17 19:26:39.636943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.448 [2024-10-17 19:26:39.636991] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.448 [2024-10-17 19:26:39.636998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.448 [2024-10-17 19:26:39.637002] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.637017] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637022] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.448 [2024-10-17 19:26:39.637033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.448 [2024-10-17 19:26:39.637050] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.448 [2024-10-17 19:26:39.637095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.448 [2024-10-17 19:26:39.637102] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.448 [2024-10-17 19:26:39.637106] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.637120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.637129] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x245c750) 00:24:30.448 [2024-10-17 19:26:39.637137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.448 [2024-10-17 19:26:39.641179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c0cc0, cid 3, qid 0 00:24:30.448 [2024-10-17 19:26:39.641237] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.448 [2024-10-17 19:26:39.641254] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.448 [2024-10-17 19:26:39.641260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.448 [2024-10-17 19:26:39.641268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c0cc0) on tqpair=0x245c750 00:24:30.448 [2024-10-17 19:26:39.641283] nvme_ctrlr.c:1315:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:24:30.448 00:24:30.448 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:30.448 [2024-10-17 19:26:39.684855] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:30.448 [2024-10-17 19:26:39.684933] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74230 ] 00:24:30.711 [2024-10-17 19:26:39.828586] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:30.711 [2024-10-17 19:26:39.828673] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:30.711 [2024-10-17 19:26:39.828681] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:30.711 [2024-10-17 19:26:39.828698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:30.711 [2024-10-17 19:26:39.828712] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:30.711 [2024-10-17 19:26:39.829167] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:30.711 [2024-10-17 19:26:39.829238] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17b8750 0 00:24:30.711 [2024-10-17 19:26:39.836160] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:30.711 [2024-10-17 19:26:39.836186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:30.711 [2024-10-17 19:26:39.836193] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:30.711 [2024-10-17 19:26:39.836197] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:30.711 [2024-10-17 19:26:39.836241] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.836250] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.836254] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.711 [2024-10-17 19:26:39.836272] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:30.711 [2024-10-17 19:26:39.836304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.711 [2024-10-17 19:26:39.844154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.711 [2024-10-17 19:26:39.844177] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.711 [2024-10-17 19:26:39.844182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.711 [2024-10-17 19:26:39.844200] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:30.711 [2024-10-17 19:26:39.844209] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:30.711 [2024-10-17 19:26:39.844216] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:30.711 [2024-10-17 19:26:39.844240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844246] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844250] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.711 [2024-10-17 19:26:39.844260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.711 [2024-10-17 19:26:39.844288] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.711 [2024-10-17 19:26:39.844349] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.711 [2024-10-17 19:26:39.844356] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.711 [2024-10-17 19:26:39.844360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.711 [2024-10-17 19:26:39.844376] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:30.711 [2024-10-17 19:26:39.844385] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:30.711 [2024-10-17 19:26:39.844393] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844405] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.711 [2024-10-17 19:26:39.844413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.711 [2024-10-17 19:26:39.844434] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.711 [2024-10-17 19:26:39.844484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.711 [2024-10-17 19:26:39.844491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.711 [2024-10-17 19:26:39.844495] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.711 [2024-10-17 19:26:39.844506] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:30.711 [2024-10-17 19:26:39.844515] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:30.711 [2024-10-17 19:26:39.844523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.711 [2024-10-17 19:26:39.844532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.844540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.844558] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.844601] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.844608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.844612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.844622] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:30.712 [2024-10-17 19:26:39.844633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.844650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.844669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.844714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.844721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.844725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.844735] nvme_ctrlr.c:3950:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:30.712 [2024-10-17 19:26:39.844740] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:30.712 [2024-10-17 19:26:39.844749] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:30.712 [2024-10-17 19:26:39.844855] 
nvme_ctrlr.c:4148:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:30.712 [2024-10-17 19:26:39.844868] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:30.712 [2024-10-17 19:26:39.844879] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844884] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844888] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.844896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.844917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.844967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.844979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.844983] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.844987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.844993] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:30.712 [2024-10-17 19:26:39.845005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845010] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.845041] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.845084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.845091] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.845095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.845104] nvme_ctrlr.c:3985:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:30.712 [2024-10-17 19:26:39.845110] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845141] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:30.712 [2024-10-17 19:26:39.845153] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 
[2024-10-17 19:26:39.845170] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.845200] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.845303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.712 [2024-10-17 19:26:39.845311] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.712 [2024-10-17 19:26:39.845315] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=4096, cccid=0 00:24:30.712 [2024-10-17 19:26:39.845325] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181c840) on tqpair(0x17b8750): expected_datao=0, payload_size=4096 00:24:30.712 [2024-10-17 19:26:39.845330] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845340] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845345] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845354] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.845361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.845365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.845386] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:30.712 [2024-10-17 19:26:39.845391] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:30.712 [2024-10-17 19:26:39.845396] nvme_ctrlr.c:2130:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:30.712 [2024-10-17 19:26:39.845401] nvme_ctrlr.c:2154:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:30.712 [2024-10-17 19:26:39.845406] nvme_ctrlr.c:2169:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:30.712 [2024-10-17 19:26:39.845411] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845421] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:30.712 [2024-10-17 
19:26:39.845468] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.845520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.845527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.845531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.712 [2024-10-17 19:26:39.845544] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.712 [2024-10-17 19:26:39.845567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845571] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.712 [2024-10-17 19:26:39.845589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.712 [2024-10-17 19:26:39.845610] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845614] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845618] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.712 [2024-10-17 19:26:39.845630] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845644] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:30.712 [2024-10-17 19:26:39.845652] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.712 [2024-10-17 19:26:39.845657] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.712 [2024-10-17 19:26:39.845664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.712 [2024-10-17 19:26:39.845685] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c840, cid 0, qid 0 00:24:30.712 [2024-10-17 19:26:39.845692] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181c9c0, cid 1, qid 0 00:24:30.712 [2024-10-17 19:26:39.845697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cb40, cid 2, qid 0 00:24:30.712 [2024-10-17 19:26:39.845702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.712 [2024-10-17 19:26:39.845707] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.712 [2024-10-17 19:26:39.845794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.712 [2024-10-17 19:26:39.845812] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.712 [2024-10-17 19:26:39.845818] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.845822] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.845828] nvme_ctrlr.c:3103:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:30.713 [2024-10-17 19:26:39.845834] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.845849] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.845857] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.845865] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.845870] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.845874] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.845882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:30.713 [2024-10-17 19:26:39.845913] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.713 [2024-10-17 19:26:39.845960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.845967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.845971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.845975] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846045] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846057] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846066] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846071] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.713 
[2024-10-17 19:26:39.846079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.713 [2024-10-17 19:26:39.846099] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.713 [2024-10-17 19:26:39.846174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.713 [2024-10-17 19:26:39.846183] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.713 [2024-10-17 19:26:39.846187] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=4096, cccid=4 00:24:30.713 [2024-10-17 19:26:39.846196] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181ce40) on tqpair(0x17b8750): expected_datao=0, payload_size=4096 00:24:30.713 [2024-10-17 19:26:39.846201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846209] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846213] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846254] nvme_ctrlr.c:4779:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:30.713 [2024-10-17 19:26:39.846267] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846278] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846286] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846290] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.846298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.713 [2024-10-17 19:26:39.846321] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.713 [2024-10-17 19:26:39.846390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.713 [2024-10-17 19:26:39.846398] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.713 [2024-10-17 19:26:39.846402] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846406] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=4096, cccid=4 00:24:30.713 [2024-10-17 19:26:39.846411] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181ce40) on tqpair(0x17b8750): expected_datao=0, payload_size=4096 00:24:30.713 [2024-10-17 19:26:39.846415] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846423] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846427] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846442] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846463] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846474] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846483] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846488] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.846495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.713 [2024-10-17 19:26:39.846516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.713 [2024-10-17 19:26:39.846577] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.713 [2024-10-17 19:26:39.846584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.713 [2024-10-17 19:26:39.846588] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=4096, cccid=4 00:24:30.713 [2024-10-17 19:26:39.846597] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181ce40) on tqpair(0x17b8750): expected_datao=0, payload_size=4096 00:24:30.713 [2024-10-17 19:26:39.846602] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846609] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846613] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846622] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846651] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846661] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846672] 
nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846679] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846685] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846690] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846696] nvme_ctrlr.c:3191:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:30.713 [2024-10-17 19:26:39.846701] nvme_ctrlr.c:1603:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:30.713 [2024-10-17 19:26:39.846708] nvme_ctrlr.c:1609:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:30.713 [2024-10-17 19:26:39.846727] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.846740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.713 [2024-10-17 19:26:39.846748] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846752] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846756] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.846762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.713 [2024-10-17 19:26:39.846785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.713 [2024-10-17 19:26:39.846793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cfc0, cid 5, qid 0 00:24:30.713 [2024-10-17 19:26:39.846857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846886] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cfc0) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.846905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x17b8750) 00:24:30.713 [2024-10-17 19:26:39.846917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.713 [2024-10-17 19:26:39.846935] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cfc0, cid 5, qid 0 00:24:30.713 [2024-10-17 19:26:39.846981] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.713 [2024-10-17 19:26:39.846989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.713 [2024-10-17 19:26:39.846992] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.846997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cfc0) on tqpair=0x17b8750 00:24:30.713 [2024-10-17 19:26:39.847008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.713 [2024-10-17 19:26:39.847012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847038] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cfc0, cid 5, qid 0 00:24:30.714 [2024-10-17 19:26:39.847088] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847095] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cfc0) on tqpair=0x17b8750 00:24:30.714 [2024-10-17 19:26:39.847114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847119] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cfc0, cid 5, qid 0 00:24:30.714 [2024-10-17 19:26:39.847214] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847226] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847230] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cfc0) on tqpair=0x17b8750 00:24:30.714 [2024-10-17 19:26:39.847255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17b8750) 00:24:30.714 [2024-10-17 19:26:39.847327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.714 [2024-10-17 19:26:39.847349] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181cfc0, cid 5, qid 0 00:24:30.714 [2024-10-17 19:26:39.847356] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ce40, cid 4, qid 0 00:24:30.714 [2024-10-17 19:26:39.847361] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d140, cid 6, qid 0 00:24:30.714 [2024-10-17 19:26:39.847366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d2c0, cid 7, qid 0 00:24:30.714 [2024-10-17 19:26:39.847507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.714 [2024-10-17 19:26:39.847514] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.714 [2024-10-17 19:26:39.847518] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=8192, cccid=5 00:24:30.714 [2024-10-17 19:26:39.847527] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181cfc0) on tqpair(0x17b8750): expected_datao=0, payload_size=8192 00:24:30.714 [2024-10-17 19:26:39.847531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847548] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847553] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.714 [2024-10-17 19:26:39.847566] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.714 [2024-10-17 19:26:39.847569] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=512, cccid=4 00:24:30.714 [2024-10-17 19:26:39.847578] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181ce40) on tqpair(0x17b8750): expected_datao=0, payload_size=512 00:24:30.714 [2024-10-17 19:26:39.847583] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847589] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847593] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.714 [2024-10-17 19:26:39.847605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.714 [2024-10-17 19:26:39.847609] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=512, cccid=6 00:24:30.714 [2024-10-17 19:26:39.847617] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d140) on tqpair(0x17b8750): expected_datao=0, payload_size=512 00:24:30.714 [2024-10-17 19:26:39.847622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847628] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847632] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:30.714 [2024-10-17 19:26:39.847644] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:30.714 [2024-10-17 19:26:39.847647] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847651] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17b8750): datao=0, datal=4096, cccid=7 00:24:30.714 [2024-10-17 19:26:39.847656] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d2c0) on tqpair(0x17b8750): expected_datao=0, payload_size=4096 00:24:30.714 [2024-10-17 19:26:39.847660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847667] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847671] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847694] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cfc0) on tqpair=0x17b8750 00:24:30.714 [2024-10-17 19:26:39.847711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847726] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ce40) on tqpair=0x17b8750 00:24:30.714 [2024-10-17 19:26:39.847739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847753] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d140) on tqpair=0x17b8750 00:24:30.714 [2024-10-17 19:26:39.847761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.714 [2024-10-17 19:26:39.847767] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.714 [2024-10-17 19:26:39.847771] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.714 [2024-10-17 19:26:39.847775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d2c0) on tqpair=0x17b8750 00:24:30.714 ===================================================== 00:24:30.714 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.714 ===================================================== 00:24:30.714 Controller Capabilities/Features 00:24:30.714 ================================ 00:24:30.714 Vendor ID: 8086 00:24:30.714 Subsystem Vendor ID: 8086 00:24:30.714 Serial Number: SPDK00000000000001 00:24:30.714 Model Number: SPDK bdev Controller 00:24:30.714 Firmware Version: 25.01 00:24:30.714 Recommended Arb Burst: 6 00:24:30.714 IEEE OUI Identifier: e4 d2 5c 00:24:30.714 Multi-path I/O 00:24:30.714 May have multiple subsystem ports: Yes 00:24:30.714 May have multiple controllers: Yes 00:24:30.714 Associated with SR-IOV VF: No 00:24:30.714 Max Data Transfer Size: 131072 00:24:30.714 Max Number of Namespaces: 32 00:24:30.714 Max Number of I/O Queues: 127 00:24:30.714 NVMe Specification Version (VS): 1.3 00:24:30.714 NVMe Specification Version (Identify): 1.3 00:24:30.714 Maximum Queue Entries: 128 00:24:30.714 Contiguous Queues Required: Yes 00:24:30.714 Arbitration Mechanisms Supported 00:24:30.714 Weighted Round Robin: Not Supported 00:24:30.714 Vendor Specific: Not Supported 00:24:30.714 Reset Timeout: 15000 ms 00:24:30.714 Doorbell Stride: 4 bytes 00:24:30.714 NVM Subsystem Reset: Not Supported 00:24:30.714 Command Sets Supported 00:24:30.714 NVM Command Set: Supported 00:24:30.714 Boot Partition: Not Supported 00:24:30.714 Memory Page Size Minimum: 4096 bytes 00:24:30.714 Memory Page Size Maximum: 4096 bytes 00:24:30.714 Persistent Memory Region: Not Supported 00:24:30.714 Optional Asynchronous Events Supported 00:24:30.714 Namespace Attribute Notices: Supported 00:24:30.714 Firmware Activation Notices: Not Supported 00:24:30.714 ANA Change Notices: Not Supported 00:24:30.714 PLE Aggregate Log Change Notices: Not Supported 00:24:30.714 LBA Status Info Alert Notices: Not Supported 00:24:30.714 EGE Aggregate Log Change Notices: Not Supported 00:24:30.714 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.714 Zone Descriptor Change Notices: Not Supported 00:24:30.714 Discovery Log Change Notices: Not Supported 00:24:30.714 Controller Attributes 00:24:30.714 128-bit Host Identifier: Supported 00:24:30.714 Non-Operational Permissive Mode: Not Supported 00:24:30.714 NVM Sets: Not Supported 00:24:30.714 Read Recovery Levels: Not Supported 00:24:30.714 Endurance Groups: Not Supported 00:24:30.714 Predictable Latency Mode: Not Supported 00:24:30.714 Traffic Based Keep ALive: Not Supported 00:24:30.714 Namespace Granularity: Not Supported 00:24:30.714 SQ Associations: Not Supported 00:24:30.714 UUID List: Not Supported 00:24:30.714 Multi-Domain Subsystem: Not Supported 00:24:30.714 Fixed Capacity Management: Not Supported 00:24:30.714 Variable Capacity Management: Not Supported 00:24:30.715 Delete Endurance Group: Not Supported 00:24:30.715 Delete NVM Set: Not Supported 00:24:30.715 Extended LBA Formats Supported: Not Supported 00:24:30.715 Flexible Data Placement Supported: Not Supported 00:24:30.715 00:24:30.715 Controller Memory Buffer Support 00:24:30.715 ================================ 00:24:30.715 Supported: No 00:24:30.715 00:24:30.715 
Persistent Memory Region Support 00:24:30.715 ================================ 00:24:30.715 Supported: No 00:24:30.715 00:24:30.715 Admin Command Set Attributes 00:24:30.715 ============================ 00:24:30.715 Security Send/Receive: Not Supported 00:24:30.715 Format NVM: Not Supported 00:24:30.715 Firmware Activate/Download: Not Supported 00:24:30.715 Namespace Management: Not Supported 00:24:30.715 Device Self-Test: Not Supported 00:24:30.715 Directives: Not Supported 00:24:30.715 NVMe-MI: Not Supported 00:24:30.715 Virtualization Management: Not Supported 00:24:30.715 Doorbell Buffer Config: Not Supported 00:24:30.715 Get LBA Status Capability: Not Supported 00:24:30.715 Command & Feature Lockdown Capability: Not Supported 00:24:30.715 Abort Command Limit: 4 00:24:30.715 Async Event Request Limit: 4 00:24:30.715 Number of Firmware Slots: N/A 00:24:30.715 Firmware Slot 1 Read-Only: N/A 00:24:30.715 Firmware Activation Without Reset: N/A 00:24:30.715 Multiple Update Detection Support: N/A 00:24:30.715 Firmware Update Granularity: No Information Provided 00:24:30.715 Per-Namespace SMART Log: No 00:24:30.715 Asymmetric Namespace Access Log Page: Not Supported 00:24:30.715 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:30.715 Command Effects Log Page: Supported 00:24:30.715 Get Log Page Extended Data: Supported 00:24:30.715 Telemetry Log Pages: Not Supported 00:24:30.715 Persistent Event Log Pages: Not Supported 00:24:30.715 Supported Log Pages Log Page: May Support 00:24:30.715 Commands Supported & Effects Log Page: Not Supported 00:24:30.715 Feature Identifiers & Effects Log Page:May Support 00:24:30.715 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.715 Data Area 4 for Telemetry Log: Not Supported 00:24:30.715 Error Log Page Entries Supported: 128 00:24:30.715 Keep Alive: Supported 00:24:30.715 Keep Alive Granularity: 10000 ms 00:24:30.715 00:24:30.715 NVM Command Set Attributes 00:24:30.715 ========================== 00:24:30.715 Submission Queue Entry Size 00:24:30.715 Max: 64 00:24:30.715 Min: 64 00:24:30.715 Completion Queue Entry Size 00:24:30.715 Max: 16 00:24:30.715 Min: 16 00:24:30.715 Number of Namespaces: 32 00:24:30.715 Compare Command: Supported 00:24:30.715 Write Uncorrectable Command: Not Supported 00:24:30.715 Dataset Management Command: Supported 00:24:30.715 Write Zeroes Command: Supported 00:24:30.715 Set Features Save Field: Not Supported 00:24:30.715 Reservations: Supported 00:24:30.715 Timestamp: Not Supported 00:24:30.715 Copy: Supported 00:24:30.715 Volatile Write Cache: Present 00:24:30.715 Atomic Write Unit (Normal): 1 00:24:30.715 Atomic Write Unit (PFail): 1 00:24:30.715 Atomic Compare & Write Unit: 1 00:24:30.715 Fused Compare & Write: Supported 00:24:30.715 Scatter-Gather List 00:24:30.715 SGL Command Set: Supported 00:24:30.715 SGL Keyed: Supported 00:24:30.715 SGL Bit Bucket Descriptor: Not Supported 00:24:30.715 SGL Metadata Pointer: Not Supported 00:24:30.715 Oversized SGL: Not Supported 00:24:30.715 SGL Metadata Address: Not Supported 00:24:30.715 SGL Offset: Supported 00:24:30.715 Transport SGL Data Block: Not Supported 00:24:30.715 Replay Protected Memory Block: Not Supported 00:24:30.715 00:24:30.715 Firmware Slot Information 00:24:30.715 ========================= 00:24:30.715 Active slot: 1 00:24:30.715 Slot 1 Firmware Revision: 25.01 00:24:30.715 00:24:30.715 00:24:30.715 Commands Supported and Effects 00:24:30.715 ============================== 00:24:30.715 Admin Commands 00:24:30.715 -------------- 00:24:30.715 Get Log Page (02h): 
Supported 00:24:30.715 Identify (06h): Supported 00:24:30.715 Abort (08h): Supported 00:24:30.715 Set Features (09h): Supported 00:24:30.715 Get Features (0Ah): Supported 00:24:30.715 Asynchronous Event Request (0Ch): Supported 00:24:30.715 Keep Alive (18h): Supported 00:24:30.715 I/O Commands 00:24:30.715 ------------ 00:24:30.715 Flush (00h): Supported LBA-Change 00:24:30.715 Write (01h): Supported LBA-Change 00:24:30.715 Read (02h): Supported 00:24:30.715 Compare (05h): Supported 00:24:30.715 Write Zeroes (08h): Supported LBA-Change 00:24:30.715 Dataset Management (09h): Supported LBA-Change 00:24:30.715 Copy (19h): Supported LBA-Change 00:24:30.715 00:24:30.715 Error Log 00:24:30.715 ========= 00:24:30.715 00:24:30.715 Arbitration 00:24:30.715 =========== 00:24:30.715 Arbitration Burst: 1 00:24:30.715 00:24:30.715 Power Management 00:24:30.715 ================ 00:24:30.715 Number of Power States: 1 00:24:30.715 Current Power State: Power State #0 00:24:30.715 Power State #0: 00:24:30.715 Max Power: 0.00 W 00:24:30.715 Non-Operational State: Operational 00:24:30.715 Entry Latency: Not Reported 00:24:30.715 Exit Latency: Not Reported 00:24:30.715 Relative Read Throughput: 0 00:24:30.715 Relative Read Latency: 0 00:24:30.715 Relative Write Throughput: 0 00:24:30.715 Relative Write Latency: 0 00:24:30.715 Idle Power: Not Reported 00:24:30.715 Active Power: Not Reported 00:24:30.715 Non-Operational Permissive Mode: Not Supported 00:24:30.715 00:24:30.715 Health Information 00:24:30.715 ================== 00:24:30.715 Critical Warnings: 00:24:30.715 Available Spare Space: OK 00:24:30.715 Temperature: OK 00:24:30.715 Device Reliability: OK 00:24:30.715 Read Only: No 00:24:30.715 Volatile Memory Backup: OK 00:24:30.715 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:30.715 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:30.715 Available Spare: 0% 00:24:30.715 Available Spare Threshold: 0% 00:24:30.715 Life Percentage Used:[2024-10-17 19:26:39.847900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.715 [2024-10-17 19:26:39.847908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17b8750) 00:24:30.715 [2024-10-17 19:26:39.847916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.715 [2024-10-17 19:26:39.847940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d2c0, cid 7, qid 0 00:24:30.715 [2024-10-17 19:26:39.847988] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.715 [2024-10-17 19:26:39.847996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.715 [2024-10-17 19:26:39.848000] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.715 [2024-10-17 19:26:39.848004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d2c0) on tqpair=0x17b8750 00:24:30.715 [2024-10-17 19:26:39.848056] nvme_ctrlr.c:4443:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:30.715 [2024-10-17 19:26:39.848070] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c840) on tqpair=0x17b8750 00:24:30.715 [2024-10-17 19:26:39.848077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.715 [2024-10-17 19:26:39.848083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181c9c0) on 
tqpair=0x17b8750 00:24:30.715 [2024-10-17 19:26:39.848088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.715 [2024-10-17 19:26:39.848093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181cb40) on tqpair=0x17b8750 00:24:30.715 [2024-10-17 19:26:39.848098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.715 [2024-10-17 19:26:39.848104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.715 [2024-10-17 19:26:39.848108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.715 [2024-10-17 19:26:39.848118] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.715 [2024-10-17 19:26:39.848122] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.715 [2024-10-17 19:26:39.848126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.715 [2024-10-17 19:26:39.852158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852191] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852266] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852376] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852383] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852387] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852391] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852397] nvme_ctrlr.c:1193:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:30.716 [2024-10-17 19:26:39.852402] nvme_ctrlr.c:1196:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:30.716 [2024-10-17 19:26:39.852413] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:30.716 [2024-10-17 19:26:39.852422] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852448] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852496] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852507] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852512] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852523] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852631] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852718] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852722] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852733] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852738] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852750] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852768] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852810] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852821] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852836] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852841] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852845] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852870] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.852918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.852925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.852929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.852944] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.852953] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.852961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.852978] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.853031] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.853046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853051] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.853063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.853081] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.853147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.853163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.853180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.853199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853250] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.853261] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.853276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.853292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.853310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853356] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.853367] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.853382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.853399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.853427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853475] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853482] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.716 [2024-10-17 19:26:39.853486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853491] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.716 [2024-10-17 19:26:39.853501] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853506] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.716 [2024-10-17 19:26:39.853510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.716 [2024-10-17 19:26:39.853518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.716 [2024-10-17 19:26:39.853535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.716 [2024-10-17 19:26:39.853578] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.716 [2024-10-17 19:26:39.853585] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.853589] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.853604] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853609] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853613] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.853620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.853638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.853684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.853691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.853695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.853709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.853726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.853743] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.853786] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.853793] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.853797] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 
19:26:39.853812] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.853824] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853829] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853832] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.853840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.853859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.853905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.853912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.853916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.853931] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.853940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.853948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.853966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854026] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854037] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854045] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854071] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854125] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854140] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854188] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854251] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854255] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854266] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854342] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854354] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854368] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854373] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854377] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854403] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854454] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854479] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854485] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854489] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854572] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854583] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854670] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854673] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854678] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854688] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854693] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854697] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854722] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.717 [2024-10-17 19:26:39.854806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.717 [2024-10-17 19:26:39.854824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.717 [2024-10-17 19:26:39.854866] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.717 [2024-10-17 19:26:39.854873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.717 [2024-10-17 19:26:39.854877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.717 [2024-10-17 19:26:39.854892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.717 [2024-10-17 19:26:39.854901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.854908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.854926] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.854972] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.854978] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.854982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.854986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.854997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855031] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855105] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855114] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855151] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 
19:26:39.855200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855207] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855211] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855231] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855317] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855322] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855332] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855337] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855341] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855433] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855453] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855478] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855528] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 
[2024-10-17 19:26:39.855531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855546] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855580] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855691] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855796] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855842] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855849] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855853] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855867] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.855901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.855945] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.855952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.855956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855960] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.855970] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855975] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.855979] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.855987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.856004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.856048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.856066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.856070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.856074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.856085] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.856090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.856094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.856101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.856119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.860155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.860179] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.860184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.860188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.718 [2024-10-17 19:26:39.860202] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.860208] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.860212] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17b8750) 00:24:30.718 [2024-10-17 19:26:39.860221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.718 [2024-10-17 19:26:39.860248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181ccc0, cid 3, qid 0 00:24:30.718 [2024-10-17 19:26:39.860299] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:30.718 [2024-10-17 19:26:39.860306] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:30.718 [2024-10-17 19:26:39.860310] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:30.718 [2024-10-17 19:26:39.860314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181ccc0) on tqpair=0x17b8750 00:24:30.719 [2024-10-17 19:26:39.860323] nvme_ctrlr.c:1315:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:30.719 0% 00:24:30.719 Data Units Read: 0 00:24:30.719 Data Units Written: 0 00:24:30.719 Host Read Commands: 0 00:24:30.719 Host Write Commands: 0 00:24:30.719 Controller Busy Time: 0 minutes 00:24:30.719 Power Cycles: 0 00:24:30.719 Power On Hours: 0 hours 00:24:30.719 Unsafe Shutdowns: 0 00:24:30.719 Unrecoverable Media Errors: 0 00:24:30.719 Lifetime Error Log Entries: 0 00:24:30.719 Warning Temperature Time: 0 minutes 00:24:30.719 Critical Temperature Time: 0 minutes 00:24:30.719 00:24:30.719 Number of Queues 00:24:30.719 ================ 00:24:30.719 Number of I/O Submission Queues: 127 00:24:30.719 Number of I/O Completion Queues: 127 00:24:30.719 00:24:30.719 Active Namespaces 00:24:30.719 ================= 00:24:30.719 Namespace ID:1 00:24:30.719 Error Recovery Timeout: Unlimited 00:24:30.719 Command Set Identifier: NVM (00h) 00:24:30.719 Deallocate: Supported 00:24:30.719 Deallocated/Unwritten Error: Not Supported 00:24:30.719 Deallocated Read Value: Unknown 00:24:30.719 Deallocate in Write Zeroes: Not Supported 00:24:30.719 Deallocated Guard Field: 0xFFFF 00:24:30.719 Flush: Supported 00:24:30.719 Reservation: Supported 00:24:30.719 Namespace Sharing Capabilities: Multiple Controllers 00:24:30.719 Size (in LBAs): 131072 (0GiB) 00:24:30.719 Capacity (in LBAs): 131072 (0GiB) 00:24:30.719 Utilization (in LBAs): 131072 (0GiB) 00:24:30.719 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:30.719 EUI64: ABCDEF0123456789 00:24:30.719 UUID: 953d9c55-e668-4e33-b4db-0e69bae327f2 00:24:30.719 Thin Provisioning: Not Supported 00:24:30.719 Per-NS Atomic Units: Yes 00:24:30.719 Atomic Boundary Size (Normal): 0 00:24:30.719 Atomic Boundary Size (PFail): 0 00:24:30.719 Atomic Boundary Offset: 0 00:24:30.719 Maximum Single Source Range Length: 65535 00:24:30.719 Maximum Copy Length: 65535 00:24:30.719 Maximum Source Range Count: 1 00:24:30.719 NGUID/EUI64 Never Reused: No 00:24:30.719 Namespace Write Protected: No 00:24:30.719 Number of LBA Formats: 1 00:24:30.719 Current LBA Format: LBA Format #00 00:24:30.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:30.719 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.719 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.719 rmmod nvme_tcp 00:24:30.719 rmmod nvme_fabrics 00:24:30.978 rmmod nvme_keyring 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 74195 ']' 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 74195 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74195 ']' 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74195 00:24:30.978 19:26:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74195 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:30.978 killing process with pid 74195 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74195' 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74195 00:24:30.978 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74195 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.236 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:24:31.495 00:24:31.495 real 0m2.368s 00:24:31.495 user 0m4.675s 00:24:31.495 sys 0m0.820s 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.495 ************************************ 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:31.495 END TEST nvmf_identify 00:24:31.495 ************************************ 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.495 ************************************ 00:24:31.495 START TEST nvmf_perf 00:24:31.495 ************************************ 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:31.495 * Looking for test storage... 
00:24:31.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.495 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.754 --rc genhtml_branch_coverage=1 00:24:31.754 --rc genhtml_function_coverage=1 00:24:31.754 --rc genhtml_legend=1 00:24:31.754 --rc geninfo_all_blocks=1 00:24:31.754 --rc geninfo_unexecuted_blocks=1 00:24:31.754 00:24:31.754 ' 00:24:31.754 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.754 --rc genhtml_branch_coverage=1 00:24:31.754 --rc genhtml_function_coverage=1 00:24:31.754 --rc genhtml_legend=1 00:24:31.754 --rc geninfo_all_blocks=1 00:24:31.754 --rc geninfo_unexecuted_blocks=1 00:24:31.754 00:24:31.754 ' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.755 --rc genhtml_branch_coverage=1 00:24:31.755 --rc genhtml_function_coverage=1 00:24:31.755 --rc genhtml_legend=1 00:24:31.755 --rc geninfo_all_blocks=1 00:24:31.755 --rc geninfo_unexecuted_blocks=1 00:24:31.755 00:24:31.755 ' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.755 --rc genhtml_branch_coverage=1 00:24:31.755 --rc genhtml_function_coverage=1 00:24:31.755 --rc genhtml_legend=1 00:24:31.755 --rc geninfo_all_blocks=1 00:24:31.755 --rc geninfo_unexecuted_blocks=1 00:24:31.755 00:24:31.755 ' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:31.755 Cannot find device "nvmf_init_br" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:31.755 Cannot find device "nvmf_init_br2" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:31.755 Cannot find device "nvmf_tgt_br" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.755 Cannot find device "nvmf_tgt_br2" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:31.755 Cannot find device "nvmf_init_br" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:31.755 Cannot find device "nvmf_init_br2" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:31.755 Cannot find device "nvmf_tgt_br" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:31.755 Cannot find device "nvmf_tgt_br2" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:31.755 Cannot find device "nvmf_br" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:31.755 Cannot find device "nvmf_init_if" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:31.755 Cannot find device "nvmf_init_if2" 00:24:31.755 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:31.756 19:26:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:31.756 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:32.014 19:26:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:32.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:32.014 00:24:32.014 --- 10.0.0.3 ping statistics --- 00:24:32.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.014 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:32.014 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:32.014 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:32.014 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:24:32.014 00:24:32.014 --- 10.0.0.4 ping statistics --- 00:24:32.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.014 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:32.015 00:24:32.015 --- 10.0.0.1 ping statistics --- 00:24:32.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.015 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:32.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:24:32.015 00:24:32.015 --- 10.0.0.2 ping statistics --- 00:24:32.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.015 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=74444 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 74444 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74444 ']' 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.015 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.277 [2024-10-17 19:26:41.302032] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:32.277 [2024-10-17 19:26:41.302188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.277 [2024-10-17 19:26:41.445892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.277 [2024-10-17 19:26:41.515923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.277 [2024-10-17 19:26:41.516265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.277 [2024-10-17 19:26:41.516416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.277 [2024-10-17 19:26:41.516548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.277 [2024-10-17 19:26:41.516642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.277 [2024-10-17 19:26:41.517980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.277 [2024-10-17 19:26:41.518085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.277 [2024-10-17 19:26:41.518189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.277 [2024-10-17 19:26:41.518237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.535 [2024-10-17 19:26:41.578829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:32.535 19:26:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:33.101 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:33.101 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:33.359 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:24:33.359 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:33.617 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:33.617 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:24:33.617 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:33.617 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:33.617 19:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.874 [2024-10-17 19:26:43.104949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.131 19:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.389 19:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:34.389 19:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.647 19:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:34.647 19:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:34.916 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:35.198 [2024-10-17 19:26:44.243688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:35.198 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:35.455 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:35.455 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:35.455 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:35.455 19:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:36.389 Initializing NVMe Controllers 00:24:36.389 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:36.389 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:24:36.389 Initialization complete. Launching workers. 00:24:36.389 ======================================================== 00:24:36.389 Latency(us) 00:24:36.389 Device Information : IOPS MiB/s Average min max 00:24:36.389 PCIE (0000:00:10.0) NSID 1 from core 0: 24384.00 95.25 1311.81 340.47 5199.77 00:24:36.389 ======================================================== 00:24:36.389 Total : 24384.00 95.25 1311.81 340.47 5199.77 00:24:36.389 00:24:36.389 19:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:37.765 Initializing NVMe Controllers 00:24:37.765 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:37.765 Initialization complete. Launching workers. 
00:24:37.765 ======================================================== 00:24:37.765 Latency(us) 00:24:37.765 Device Information : IOPS MiB/s Average min max 00:24:37.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3256.97 12.72 305.51 114.87 7112.44 00:24:37.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.11 5034.65 12022.53 00:24:37.765 ======================================================== 00:24:37.765 Total : 3380.97 13.21 591.82 114.87 12022.53 00:24:37.765 00:24:37.765 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:39.140 Initializing NVMe Controllers 00:24:39.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:39.140 Initialization complete. Launching workers. 00:24:39.140 ======================================================== 00:24:39.140 Latency(us) 00:24:39.140 Device Information : IOPS MiB/s Average min max 00:24:39.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8364.02 32.67 3830.99 714.17 7824.59 00:24:39.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4004.61 15.64 8001.11 6754.66 9287.71 00:24:39.140 ======================================================== 00:24:39.140 Total : 12368.63 48.31 5181.16 714.17 9287.71 00:24:39.140 00:24:39.398 19:26:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:39.398 19:26:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:41.953 Initializing NVMe Controllers 00:24:41.953 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.953 Controller IO queue size 128, less than required. 00:24:41.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.953 Controller IO queue size 128, less than required. 00:24:41.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.953 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.953 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.953 Initialization complete. Launching workers. 
00:24:41.953 ======================================================== 00:24:41.953 Latency(us) 00:24:41.953 Device Information : IOPS MiB/s Average min max 00:24:41.953 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1634.21 408.55 79417.02 45470.29 125279.28 00:24:41.953 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 632.42 158.10 207096.94 75906.86 343319.58 00:24:41.953 ======================================================== 00:24:41.953 Total : 2266.63 566.66 115041.43 45470.29 343319.58 00:24:41.953 00:24:41.953 19:26:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:24:41.953 Initializing NVMe Controllers 00:24:41.953 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.953 Controller IO queue size 128, less than required. 00:24:41.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.953 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:41.953 Controller IO queue size 128, less than required. 00:24:41.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.953 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:41.953 WARNING: Some requested NVMe devices were skipped 00:24:41.953 No valid NVMe controllers or AIO or URING devices found 00:24:41.953 19:26:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:24:44.503 Initializing NVMe Controllers 00:24:44.503 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.503 Controller IO queue size 128, less than required. 00:24:44.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.503 Controller IO queue size 128, less than required. 00:24:44.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.503 Initialization complete. Launching workers. 
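The last invocation above adds --transport-stat, so alongside the usual latency table perf dumps per-lcore TCP transport counters for each namespace. One rough way to read them (an interpretation, not something the tool spells out): idle_polls/polls approximates how often the poller ran without finding any work; for NSID 1 in the dump below that is 5728/8796 ≈ 0.65, i.e. roughly two thirds of the poll iterations were idle over the 2-second run.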
00:24:44.503 00:24:44.503 ==================== 00:24:44.503 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:44.503 TCP transport: 00:24:44.503 polls: 8796 00:24:44.503 idle_polls: 5728 00:24:44.503 sock_completions: 3068 00:24:44.503 nvme_completions: 5367 00:24:44.503 submitted_requests: 8034 00:24:44.503 queued_requests: 1 00:24:44.503 00:24:44.503 ==================== 00:24:44.503 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:44.503 TCP transport: 00:24:44.503 polls: 8914 00:24:44.503 idle_polls: 5381 00:24:44.503 sock_completions: 3533 00:24:44.503 nvme_completions: 5975 00:24:44.503 submitted_requests: 9004 00:24:44.503 queued_requests: 1 00:24:44.503 ======================================================== 00:24:44.503 Latency(us) 00:24:44.503 Device Information : IOPS MiB/s Average min max 00:24:44.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.29 334.57 98786.02 44585.58 176872.76 00:24:44.503 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1489.92 372.48 86575.44 33991.90 127982.51 00:24:44.503 ======================================================== 00:24:44.503 Total : 2828.21 707.05 92353.39 33991.90 176872.76 00:24:44.503 00:24:44.503 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:44.503 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.070 rmmod nvme_tcp 00:24:45.070 rmmod nvme_fabrics 00:24:45.070 rmmod nvme_keyring 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 74444 ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 74444 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74444 ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74444 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74444 00:24:45.070 killing process with pid 74444 00:24:45.070 19:26:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74444' 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74444 00:24:45.070 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74444 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:45.636 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:45.898 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:24:45.898 00:24:45.898 real 0m14.532s 00:24:45.898 user 0m52.693s 00:24:45.898 sys 0m4.090s 00:24:45.898 19:26:55 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:45.898 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:45.898 ************************************ 00:24:45.898 END TEST nvmf_perf 00:24:45.898 ************************************ 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.156 ************************************ 00:24:46.156 START TEST nvmf_fio_host 00:24:46.156 ************************************ 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:46.156 * Looking for test storage... 00:24:46.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.156 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.415 --rc genhtml_branch_coverage=1 00:24:46.415 --rc genhtml_function_coverage=1 00:24:46.415 --rc genhtml_legend=1 00:24:46.415 --rc geninfo_all_blocks=1 00:24:46.415 --rc geninfo_unexecuted_blocks=1 00:24:46.415 00:24:46.415 ' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.415 --rc genhtml_branch_coverage=1 00:24:46.415 --rc genhtml_function_coverage=1 00:24:46.415 --rc genhtml_legend=1 00:24:46.415 --rc geninfo_all_blocks=1 00:24:46.415 --rc geninfo_unexecuted_blocks=1 00:24:46.415 00:24:46.415 ' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.415 --rc genhtml_branch_coverage=1 00:24:46.415 --rc genhtml_function_coverage=1 00:24:46.415 --rc genhtml_legend=1 00:24:46.415 --rc geninfo_all_blocks=1 00:24:46.415 --rc geninfo_unexecuted_blocks=1 00:24:46.415 00:24:46.415 ' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:46.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.415 --rc genhtml_branch_coverage=1 00:24:46.415 --rc genhtml_function_coverage=1 00:24:46.415 --rc genhtml_legend=1 00:24:46.415 --rc geninfo_all_blocks=1 00:24:46.415 --rc geninfo_unexecuted_blocks=1 00:24:46.415 00:24:46.415 ' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.415 19:26:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.415 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.416 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:46.416 Cannot find device "nvmf_init_br" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:46.416 Cannot find device "nvmf_init_br2" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:46.416 Cannot find device "nvmf_tgt_br" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:24:46.416 Cannot find device "nvmf_tgt_br2" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:46.416 Cannot find device "nvmf_init_br" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:46.416 Cannot find device "nvmf_init_br2" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:46.416 Cannot find device "nvmf_tgt_br" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:46.416 Cannot find device "nvmf_tgt_br2" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:46.416 Cannot find device "nvmf_br" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:46.416 Cannot find device "nvmf_init_if" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:46.416 Cannot find device "nvmf_init_if2" 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:46.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:46.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:46.416 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:46.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:46.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:24:46.679 00:24:46.679 --- 10.0.0.3 ping statistics --- 00:24:46.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.679 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:46.679 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:46.679 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:24:46.679 00:24:46.679 --- 10.0.0.4 ping statistics --- 00:24:46.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.679 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:46.679 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:46.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:46.680 00:24:46.680 --- 10.0.0.1 ping statistics --- 00:24:46.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.680 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:46.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:24:46.680 00:24:46.680 --- 10.0.0.2 ping statistics --- 00:24:46.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.680 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74916 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74916 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 74916 ']' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.680 19:26:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.954 [2024-10-17 19:26:55.948442] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:46.954 [2024-10-17 19:26:55.948733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.954 [2024-10-17 19:26:56.089329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.954 [2024-10-17 19:26:56.165039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.954 [2024-10-17 19:26:56.165652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.954 [2024-10-17 19:26:56.165962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.954 [2024-10-17 19:26:56.166253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.954 [2024-10-17 19:26:56.166475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
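The target starting up here was launched with ip netns exec nvmf_tgt_ns_spdk, i.e. it lives in the network namespace built a few lines earlier, while the initiator-side interfaces stay in the root namespace and both ends hang off a bridge. Stripped of the checks and the second interface pair, the topology from the trace amounts to roughly this sketch (interface names and addresses as used above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # plus 'ip link set ... up' on every interface and an iptables ACCEPT for TCP/4420

which is what the ping checks above (10.0.0.3/10.0.0.4 from the host side, 10.0.0.1/10.0.0.2 from inside the namespace) verify before the target is started.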
00:24:46.954 [2024-10-17 19:26:56.168025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.954 [2024-10-17 19:26:56.168204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.954 [2024-10-17 19:26:56.168273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.954 [2024-10-17 19:26:56.168279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.212 [2024-10-17 19:26:56.242348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:47.212 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.212 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:47.212 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.470 [2024-10-17 19:26:56.625500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.470 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:47.470 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.470 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.470 19:26:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:48.035 Malloc1 00:24:48.036 19:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.292 19:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:48.550 19:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:48.807 [2024-10-17 19:26:57.941259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:48.807 19:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:49.066 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.325 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.325 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.325 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:49.325 19:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:49.325 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:49.325 fio-3.35 00:24:49.325 Starting 1 thread 00:24:51.856 00:24:51.856 test: (groupid=0, jobs=1): err= 0: pid=74987: Thu Oct 17 19:27:00 2024 00:24:51.856 read: IOPS=8813, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2006msec) 00:24:51.856 slat (nsec): min=1951, max=244857, avg=2671.18, stdev=2485.45 00:24:51.856 clat (usec): min=1726, max=13617, avg=7552.73, stdev=566.06 00:24:51.856 lat (usec): min=1782, max=13619, avg=7555.40, stdev=565.75 00:24:51.856 clat percentiles (usec): 00:24:51.856 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:24:51.856 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7635], 00:24:51.856 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8455], 00:24:51.856 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[12518], 99.95th=[13173], 00:24:51.856 | 99.99th=[13566] 00:24:51.856 bw ( KiB/s): min=34424, max=35696, per=99.94%, avg=35232.00, stdev=566.33, samples=4 00:24:51.856 iops : min= 8606, max= 8924, avg=8808.00, stdev=141.58, samples=4 00:24:51.856 write: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2006msec); 0 zone resets 00:24:51.856 slat (usec): min=2, max=157, avg= 2.80, stdev= 1.69 00:24:51.856 clat (usec): min=1605, max=13264, avg=6903.43, stdev=521.16 00:24:51.856 lat (usec): min=1620, max=13266, avg=6906.23, stdev=521.00 00:24:51.856 clat 
percentiles (usec): 00:24:51.856 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:24:51.856 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:24:51.856 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7701], 00:24:51.856 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11731], 00:24:51.856 | 99.99th=[13173] 00:24:51.856 bw ( KiB/s): min=34728, max=35648, per=99.94%, avg=35282.00, stdev=414.32, samples=4 00:24:51.856 iops : min= 8682, max= 8912, avg=8820.50, stdev=103.58, samples=4 00:24:51.856 lat (msec) : 2=0.04%, 4=0.12%, 10=99.66%, 20=0.19% 00:24:51.856 cpu : usr=68.93%, sys=23.04%, ctx=32, majf=0, minf=6 00:24:51.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:51.856 issued rwts: total=17679,17705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:51.856 00:24:51.856 Run status group 0 (all jobs): 00:24:51.856 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.4MB), run=2006-2006msec 00:24:51.856 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.5MB), run=2006-2006msec 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:51.856 19:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:24:51.856 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:51.856 fio-3.35 00:24:51.856 Starting 1 thread 00:24:54.383 00:24:54.383 test: (groupid=0, jobs=1): err= 0: pid=75036: Thu Oct 17 19:27:03 2024 00:24:54.383 read: IOPS=8132, BW=127MiB/s (133MB/s)(255MiB/2010msec) 00:24:54.383 slat (usec): min=3, max=114, avg= 3.86, stdev= 1.91 00:24:54.383 clat (usec): min=2094, max=17898, avg=8720.80, stdev=2751.03 00:24:54.383 lat (usec): min=2097, max=17902, avg=8724.66, stdev=2751.06 00:24:54.383 clat percentiles (usec): 00:24:54.383 | 1.00th=[ 4047], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6194], 00:24:54.383 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9241], 00:24:54.383 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12518], 95.00th=[13435], 00:24:54.383 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:24:54.383 | 99.99th=[17695] 00:24:54.383 bw ( KiB/s): min=60928, max=73216, per=50.62%, avg=65864.00, stdev=5323.96, samples=4 00:24:54.383 iops : min= 3808, max= 4576, avg=4116.50, stdev=332.75, samples=4 00:24:54.383 write: IOPS=4598, BW=71.8MiB/s (75.3MB/s)(134MiB/1870msec); 0 zone resets 00:24:54.383 slat (usec): min=33, max=315, avg=39.11, stdev= 7.23 00:24:54.383 clat (usec): min=6637, max=21188, avg=12397.53, stdev=2206.98 00:24:54.383 lat (usec): min=6675, max=21253, avg=12436.64, stdev=2207.52 00:24:54.383 clat percentiles (usec): 00:24:54.383 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:24:54.383 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12780], 00:24:54.383 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15401], 95.00th=[16581], 00:24:54.383 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20579], 99.95th=[20841], 00:24:54.383 | 99.99th=[21103] 00:24:54.383 bw ( KiB/s): min=63008, max=76608, per=93.09%, avg=68488.00, stdev=6098.97, samples=4 00:24:54.383 iops : min= 3938, max= 4788, avg=4280.50, stdev=381.19, samples=4 00:24:54.383 lat (msec) : 4=0.57%, 10=48.37%, 20=51.00%, 50=0.07% 00:24:54.383 cpu : usr=84.57%, sys=11.45%, ctx=6, majf=0, minf=2 00:24:54.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:54.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:54.383 issued rwts: total=16346,8599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:54.383 00:24:54.383 Run status group 0 (all jobs): 00:24:54.383 
READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2010-2010msec 00:24:54.383 WRITE: bw=71.8MiB/s (75.3MB/s), 71.8MiB/s-71.8MiB/s (75.3MB/s-75.3MB/s), io=134MiB (141MB), run=1870-1870msec 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:54.383 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.652 rmmod nvme_tcp 00:24:54.652 rmmod nvme_fabrics 00:24:54.652 rmmod nvme_keyring 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 74916 ']' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 74916 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74916 ']' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74916 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74916 00:24:54.652 killing process with pid 74916 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74916' 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74916 00:24:54.652 19:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74916 00:24:54.908 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:54.908 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:54.908 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:54.908 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:54.908 19:27:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:54.908 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:54.909 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:24:55.166 00:24:55.166 real 0m9.170s 00:24:55.166 user 0m36.215s 00:24:55.166 sys 0m2.515s 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.166 ************************************ 00:24:55.166 END TEST nvmf_fio_host 00:24:55.166 ************************************ 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.166 ************************************ 00:24:55.166 START TEST nvmf_failover 
00:24:55.166 ************************************ 00:24:55.166 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:55.425 * Looking for test storage... 00:24:55.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:55.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.425 --rc genhtml_branch_coverage=1 00:24:55.425 --rc genhtml_function_coverage=1 00:24:55.425 --rc genhtml_legend=1 00:24:55.425 --rc geninfo_all_blocks=1 00:24:55.425 --rc geninfo_unexecuted_blocks=1 00:24:55.425 00:24:55.425 ' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:55.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.425 --rc genhtml_branch_coverage=1 00:24:55.425 --rc genhtml_function_coverage=1 00:24:55.425 --rc genhtml_legend=1 00:24:55.425 --rc geninfo_all_blocks=1 00:24:55.425 --rc geninfo_unexecuted_blocks=1 00:24:55.425 00:24:55.425 ' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:55.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.425 --rc genhtml_branch_coverage=1 00:24:55.425 --rc genhtml_function_coverage=1 00:24:55.425 --rc genhtml_legend=1 00:24:55.425 --rc geninfo_all_blocks=1 00:24:55.425 --rc geninfo_unexecuted_blocks=1 00:24:55.425 00:24:55.425 ' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:55.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.425 --rc genhtml_branch_coverage=1 00:24:55.425 --rc genhtml_function_coverage=1 00:24:55.425 --rc genhtml_legend=1 00:24:55.425 --rc geninfo_all_blocks=1 00:24:55.425 --rc geninfo_unexecuted_blocks=1 00:24:55.425 00:24:55.425 ' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.425 
19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.425 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.426 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
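For reference, the fio_nvme run above does not touch a kernel block device at all: the SPDK NVMe fio plugin is LD_PRELOADed into a stock fio binary and the NVMe/TCP target is addressed through the --filename string instead of a device path. A minimal sketch of that invocation, using only the paths and the 10.0.0.3:4420 listener shown in this run (the contents of mock_sgl_config.fio are not reproduced in the log):

# Drive fio through the SPDK NVMe plugin (ioengine=spdk) against the NVMe/TCP target.
# All paths and the listener address below are taken from the run above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'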
00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:55.426 Cannot find device "nvmf_init_br" 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:55.426 Cannot find device "nvmf_init_br2" 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:24:55.426 Cannot find device "nvmf_tgt_br" 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:24:55.426 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.684 Cannot find device "nvmf_tgt_br2" 00:24:55.684 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:24:55.684 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:55.684 Cannot find device "nvmf_init_br" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:55.685 Cannot find device "nvmf_init_br2" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:55.685 Cannot find device "nvmf_tgt_br" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:55.685 Cannot find device "nvmf_tgt_br2" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:55.685 Cannot find device "nvmf_br" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:55.685 Cannot find device "nvmf_init_if" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:55.685 Cannot find device "nvmf_init_if2" 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:55.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:55.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:55.685 
19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:55.685 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:55.944 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:55.944 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:55.944 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:55.944 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:55.944 19:27:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:55.944 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:55.944 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:55.944 00:24:55.944 --- 10.0.0.3 ping statistics --- 00:24:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.944 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:55.944 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:55.944 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:24:55.944 00:24:55.944 --- 10.0.0.4 ping statistics --- 00:24:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.944 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:55.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:55.944 00:24:55.944 --- 10.0.0.1 ping statistics --- 00:24:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.944 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:55.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:24:55.944 00:24:55.944 --- 10.0.0.2 ping statistics --- 00:24:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.944 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=75303 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 75303 00:24:55.944 19:27:05 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75303 ']' 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.944 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:55.944 [2024-10-17 19:27:05.136778] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:24:55.944 [2024-10-17 19:27:05.136899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.202 [2024-10-17 19:27:05.275761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:56.202 [2024-10-17 19:27:05.358304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.202 [2024-10-17 19:27:05.358551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.202 [2024-10-17 19:27:05.358719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.202 [2024-10-17 19:27:05.358876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.202 [2024-10-17 19:27:05.358920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
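The nvmf_veth_init sequence above gives this test a self-contained topology: nvmf_tgt runs inside the nvmf_tgt_ns_spdk network namespace on 10.0.0.3/10.0.0.4, the initiator side keeps 10.0.0.1/10.0.0.2 in the root namespace, and all interfaces hang off the nvmf_br bridge with iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of the same setup, reduced to one veth pair per side (the helper above creates two initiator and two target interfaces); every command is taken from the log, only the second pair is omitted:

# Target namespace plus one initiator pair and one target pair, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                    # root namespace reaches the target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction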
00:24:56.202 [2024-10-17 19:27:05.360546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.202 [2024-10-17 19:27:05.360634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.202 [2024-10-17 19:27:05.360639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.202 [2024-10-17 19:27:05.441091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.460 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.717 [2024-10-17 19:27:05.846181] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.717 19:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:56.975 Malloc0 00:24:56.975 19:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.232 19:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:57.491 19:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:57.749 [2024-10-17 19:27:06.956320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:57.749 19:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:58.315 [2024-10-17 19:27:07.276498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:58.315 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:58.572 [2024-10-17 19:27:07.576853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:24:58.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
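Everything on the target side of the failover test is configured through rpc.py, as captured above: one TCP transport, a Malloc bdev as the namespace, and a single subsystem listening on three ports on 10.0.0.3 so the host has somewhere to fail over to once a listener is removed. A condensed sketch of that RPC sequence, using only commands and arguments shown in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from failover.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                 # three listeners give the host two failover paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done

The host side then attaches bdevperf to the first listener with -x failover (bdev_nvme_attach_controller below) and exercises path switching by removing and re-adding listeners via nvmf_subsystem_remove_listener / nvmf_subsystem_add_listener while I/O runs.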
00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75353 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75353 /var/tmp/bdevperf.sock 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75353 ']' 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.572 19:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.505 19:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.505 19:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:59.505 19:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.072 NVMe0n1 00:25:00.072 19:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.330 00:25:00.330 19:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75382 00:25:00.330 19:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.330 19:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:01.328 19:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:01.586 [2024-10-17 19:27:10.786210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f66340 is same with the state(6) to be set 00:25:01.586 [2024-10-17 19:27:10.786499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f66340 is same with the state(6) to be set 00:25:01.586 [2024-10-17 19:27:10.786515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f66340 is same with the state(6) to be set 00:25:01.586 [2024-10-17 19:27:10.786525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f66340 is same with the state(6) to be set 00:25:01.586 19:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:04.879 
19:27:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.137 00:25:05.137 19:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:05.394 19:27:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:08.677 19:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:08.677 [2024-10-17 19:27:17.850117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:08.677 19:27:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:10.053 19:27:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:25:10.053 19:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75382 00:25:16.616 { 00:25:16.616 "results": [ 00:25:16.616 { 00:25:16.616 "job": "NVMe0n1", 00:25:16.616 "core_mask": "0x1", 00:25:16.616 "workload": "verify", 00:25:16.616 "status": "finished", 00:25:16.616 "verify_range": { 00:25:16.616 "start": 0, 00:25:16.616 "length": 16384 00:25:16.616 }, 00:25:16.616 "queue_depth": 128, 00:25:16.616 "io_size": 4096, 00:25:16.616 "runtime": 15.012193, 00:25:16.616 "iops": 8279.73634498304, 00:25:16.616 "mibps": 32.34272009759, 00:25:16.616 "io_failed": 3189, 00:25:16.616 "io_timeout": 0, 00:25:16.616 "avg_latency_us": 15037.305123842476, 00:25:16.616 "min_latency_us": 580.8872727272727, 00:25:16.616 "max_latency_us": 17277.672727272726 00:25:16.616 } 00:25:16.616 ], 00:25:16.616 "core_count": 1 00:25:16.616 } 00:25:16.616 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75353 00:25:16.616 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75353 ']' 00:25:16.616 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75353 00:25:16.616 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75353 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.617 killing process with pid 75353 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75353' 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75353 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75353 00:25:16.617 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:16.617 [2024-10-17 19:27:07.662007] Starting SPDK v25.01-pre git sha1 
006f950ff / DPDK 24.03.0 initialization... 00:25:16.617 [2024-10-17 19:27:07.662156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75353 ] 00:25:16.617 [2024-10-17 19:27:07.806635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.617 [2024-10-17 19:27:07.894930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.617 [2024-10-17 19:27:07.973256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:16.617 Running I/O for 15 seconds... 00:25:16.617 8553.00 IOPS, 33.41 MiB/s [2024-10-17T19:27:25.875Z] [2024-10-17 19:27:10.786672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.786751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.786801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.786835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.786898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.786930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.786962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.786978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.617 [2024-10-17 19:27:10.787387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.617 [2024-10-17 19:27:10.787754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.617 [2024-10-17 19:27:10.787948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.617 [2024-10-17 19:27:10.787964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.787992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788077] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83632 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.788698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:16.618 [2024-10-17 19:27:10.788761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.788974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.788989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.618 [2024-10-17 19:27:10.789301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.789333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.618 [2024-10-17 19:27:10.789364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.618 [2024-10-17 19:27:10.789380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.789564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.789974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.789989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 
[2024-10-17 19:27:10.790149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.619 [2024-10-17 19:27:10.790490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.619 [2024-10-17 19:27:10.790644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.619 [2024-10-17 19:27:10.790659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.790979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.790996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.791016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:10.791048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.620 [2024-10-17 19:27:10.791113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.620 [2024-10-17 19:27:10.791125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84072 len:8 PRP1 0x0 PRP2 0x0 00:25:16.620 [2024-10-17 19:27:10.791154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791221] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x219d160 was disconnected and freed. reset controller. 
00:25:16.620 [2024-10-17 19:27:10.791242] bdev_nvme.c:2019:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:25:16.620 [2024-10-17 19:27:10.791305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.620 [2024-10-17 19:27:10.791328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.620 [2024-10-17 19:27:10.791360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.620 [2024-10-17 19:27:10.791389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.620 [2024-10-17 19:27:10.791425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:10.791440] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.620 [2024-10-17 19:27:10.795303] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.620 [2024-10-17 19:27:10.795357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e2e0 (9): Bad file descriptor 00:25:16.620 [2024-10-17 19:27:10.829213] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
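The block above is the usual shape of a path failover in this test: every command still queued on the I/O qpair is completed manually as ABORTED - SQ DELETION when the submission queue is torn down, the qpair is disconnected and freed, and bdev_nvme then fails over from 10.0.0.3:4420 to 10.0.0.3:4421 and resets the controller. A minimal sketch for tallying those notices from a saved copy of a console log like this one follows; the script itself and its log-file argument are illustrative assumptions, not part of the autotest.

#!/usr/bin/env python3
# Illustrative only: count the nvme_qpair print notices seen in a saved console log.
# Assumes the log file path is passed as the first argument (an assumption for
# illustration; the autotest does not ship or run this script).
import re
import sys
from collections import Counter

# Matches the "nvme_io_qpair_print_command" notices shown above (opcode + sqid).
cmd_re = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+)")
# Matches the completions aborted because the submission queue was deleted.
cpl_re = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

commands = Counter()
aborted = 0
with open(sys.argv[1]) as log:
    for line in log:
        for opcode, sqid in cmd_re.findall(line):
            commands[(sqid, opcode)] += 1
        aborted += len(cpl_re.findall(line))

for (sqid, opcode), count in sorted(commands.items()):
    print(f"sqid {sqid} {opcode:<5} commands printed: {count}")
print(f"completions aborted by SQ deletion: {aborted}")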
00:25:16.620 8634.50 IOPS, 33.73 MiB/s [2024-10-17T19:27:25.878Z] 8756.33 IOPS, 34.20 MiB/s [2024-10-17T19:27:25.878Z] 8848.25 IOPS, 34.56 MiB/s [2024-10-17T19:27:25.878Z] [2024-10-17 19:27:14.548678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.548825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.548889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.548920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.548948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.548977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.548991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.549020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.620 [2024-10-17 19:27:14.549065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.620 [2024-10-17 19:27:14.549402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.620 [2024-10-17 19:27:14.549420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.549892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.549925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.549958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.549974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.549989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 
[2024-10-17 19:27:14.550198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.621 [2024-10-17 19:27:14.550453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.621 [2024-10-17 19:27:14.550510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.621 [2024-10-17 19:27:14.550525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.550972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.550987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.622 [2024-10-17 19:27:14.551520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 
19:27:14.551549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.622 [2024-10-17 19:27:14.551744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.622 [2024-10-17 19:27:14.551759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.551980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.551994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.623 [2024-10-17 19:27:14.552769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 
[2024-10-17 19:27:14.552812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.623 [2024-10-17 19:27:14.552969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.623 [2024-10-17 19:27:14.552988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a11d0 is same with the state(6) to be set 00:25:16.623 [2024-10-17 19:27:14.553010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.623 [2024-10-17 19:27:14.553022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.623 [2024-10-17 19:27:14.553033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:25:16.624 [2024-10-17 19:27:14.553050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:14.553117] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a11d0 was disconnected and freed. reset controller. 
00:25:16.624 [2024-10-17 19:27:14.553147] bdev_nvme.c:2019:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 
00:25:16.624 [2024-10-17 19:27:14.553206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.624 [2024-10-17 19:27:14.553227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.624 [2024-10-17 19:27:14.553242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.624 [2024-10-17 19:27:14.553255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.624 [2024-10-17 19:27:14.553269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.624 [2024-10-17 19:27:14.553282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.624 [2024-10-17 19:27:14.553296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:16.624 [2024-10-17 19:27:14.553309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:16.624 [2024-10-17 19:27:14.553322] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:16.624 [2024-10-17 19:27:14.556945] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:16.624 [2024-10-17 19:27:14.556990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e2e0 (9): Bad file descriptor 
00:25:16.624 [2024-10-17 19:27:14.593670] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:16.624 8760.60 IOPS, 34.22 MiB/s [2024-10-17T19:27:25.882Z] 8731.50 IOPS, 34.11 MiB/s [2024-10-17T19:27:25.882Z] 8748.14 IOPS, 34.17 MiB/s [2024-10-17T19:27:25.882Z] 8733.38 IOPS, 34.11 MiB/s [2024-10-17T19:27:25.882Z] 8721.33 IOPS, 34.07 MiB/s [2024-10-17T19:27:25.882Z] [2024-10-17 19:27:19.132264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.132599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25776 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.132968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.132996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.133045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.133081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:16.624 [2024-10-17 19:27:19.133117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.133165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.624 [2024-10-17 19:27:19.133196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.133227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.133259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.133290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.133322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.624 [2024-10-17 19:27:19.133338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.624 [2024-10-17 19:27:19.133353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.133810] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.133979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.133996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 
[2024-10-17 19:27:19.134528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.625 [2024-10-17 19:27:19.134607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.625 [2024-10-17 19:27:19.134623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.625 [2024-10-17 19:27:19.134638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.134891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.134923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.134954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.134971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.134986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.626 [2024-10-17 19:27:19.135439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26032 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.626 [2024-10-17 19:27:19.135787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.626 [2024-10-17 19:27:19.135809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.135824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.135841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.135856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.135873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.135888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.135904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 
[2024-10-17 19:27:19.135919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.135935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.135950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.135966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.135982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.627 [2024-10-17 19:27:19.136597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.627 [2024-10-17 19:27:19.136824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1410 is same with the state(6) to be set 00:25:16.627 [2024-10-17 19:27:19.136860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.627 [2024-10-17 19:27:19.136872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.627 [2024-10-17 19:27:19.136884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26256 len:8 PRP1 0x0 PRP2 0x0 00:25:16.627 [2024-10-17 19:27:19.136904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.136975] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a1410 was disconnected and freed. reset controller. 
00:25:16.627 [2024-10-17 19:27:19.136995] bdev_nvme.c:2019:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:25:16.627 [2024-10-17 19:27:19.137054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.627 [2024-10-17 19:27:19.137087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.137105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.627 [2024-10-17 19:27:19.137119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.137150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.627 [2024-10-17 19:27:19.137167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.137182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.627 [2024-10-17 19:27:19.137196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.627 [2024-10-17 19:27:19.137210] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.627 [2024-10-17 19:27:19.137258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212e2e0 (9): Bad file descriptor 00:25:16.627 [2024-10-17 19:27:19.141123] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.627 [2024-10-17 19:27:19.173818] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
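Editor's note: the long run of ABORTED - SQ DELETION notices above is the expected fallout of a path being torn down while I/O is in flight: bdev_nvme frees the disconnected qpair, fails the transport ID over to the next registered path (here 10.0.0.3:4422 -> 10.0.0.3:4420), and resets the controller. The trace registers those alternate paths on the bdevperf RPC socket with the attach pattern below (commands copied verbatim from this log and repeated later for the second bdevperf instance; only the loop form is editorial):

    # register the same NVMe bdev over three TCP paths; -x failover makes the
    # extra trids standby paths that bdev_nvme fails over to on disconnect
    for port in 4420 4421 4422; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done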
00:25:16.627 8607.90 IOPS, 33.62 MiB/s [2024-10-17T19:27:25.885Z] 8518.64 IOPS, 33.28 MiB/s [2024-10-17T19:27:25.885Z] 8441.08 IOPS, 32.97 MiB/s [2024-10-17T19:27:25.885Z] 8383.92 IOPS, 32.75 MiB/s [2024-10-17T19:27:25.885Z] 8335.00 IOPS, 32.56 MiB/s [2024-10-17T19:27:25.885Z] 8279.00 IOPS, 32.34 MiB/s 00:25:16.627 Latency(us) 00:25:16.627 [2024-10-17T19:27:25.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.627 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:16.627 Verification LBA range: start 0x0 length 0x4000 00:25:16.627 NVMe0n1 : 15.01 8279.74 32.34 212.43 0.00 15037.31 580.89 17277.67 00:25:16.627 [2024-10-17T19:27:25.885Z] =================================================================================================================== 00:25:16.627 [2024-10-17T19:27:25.885Z] Total : 8279.74 32.34 212.43 0.00 15037.31 580.89 17277.67 00:25:16.627 Received shutdown signal, test time was about 15.000000 seconds 00:25:16.627 00:25:16.628 Latency(us) 00:25:16.628 [2024-10-17T19:27:25.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.628 [2024-10-17T19:27:25.886Z] =================================================================================================================== 00:25:16.628 [2024-10-17T19:27:25.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75560 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75560 /var/tmp/bdevperf.sock 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75560 ']' 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.628 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.628 19:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.628 19:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:16.628 19:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:16.628 [2024-10-17 19:27:25.633671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:16.628 19:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:25:16.886 [2024-10-17 19:27:25.894147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:25:16.886 19:27:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.150 NVMe0n1 00:25:17.150 19:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.408 00:25:17.408 19:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.666 00:25:17.923 19:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.923 19:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:18.197 19:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.463 19:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:21.744 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:21.744 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.744 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75635 00:25:21.744 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75635 00:25:21.744 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:23.118 { 00:25:23.118 "results": [ 00:25:23.118 { 00:25:23.118 "job": "NVMe0n1", 00:25:23.118 "core_mask": "0x1", 00:25:23.118 "workload": "verify", 00:25:23.118 "status": "finished", 00:25:23.118 "verify_range": { 00:25:23.118 "start": 0, 00:25:23.118 "length": 16384 00:25:23.118 }, 00:25:23.118 "queue_depth": 128, 
00:25:23.118 "io_size": 4096, 00:25:23.118 "runtime": 1.007512, 00:25:23.118 "iops": 6181.564090551775, 00:25:23.118 "mibps": 24.14673472871787, 00:25:23.118 "io_failed": 0, 00:25:23.118 "io_timeout": 0, 00:25:23.118 "avg_latency_us": 20622.709499620483, 00:25:23.118 "min_latency_us": 2085.2363636363634, 00:25:23.118 "max_latency_us": 17158.516363636365 00:25:23.118 } 00:25:23.118 ], 00:25:23.118 "core_count": 1 00:25:23.118 } 00:25:23.118 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:23.118 [2024-10-17 19:27:24.989041] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:25:23.118 [2024-10-17 19:27:24.989196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75560 ] 00:25:23.118 [2024-10-17 19:27:25.125925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.118 [2024-10-17 19:27:25.205726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.118 [2024-10-17 19:27:25.281138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:23.118 [2024-10-17 19:27:27.552790] bdev_nvme.c:2019:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:25:23.118 [2024-10-17 19:27:27.552977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.118 [2024-10-17 19:27:27.553005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.118 [2024-10-17 19:27:27.553027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.118 [2024-10-17 19:27:27.553042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.118 [2024-10-17 19:27:27.553058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.118 [2024-10-17 19:27:27.553072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.118 [2024-10-17 19:27:27.553088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.118 [2024-10-17 19:27:27.553103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.118 [2024-10-17 19:27:27.553118] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:23.118 [2024-10-17 19:27:27.553197] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:23.118 [2024-10-17 19:27:27.553242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8d2e0 (9): Bad file descriptor 00:25:23.118 [2024-10-17 19:27:27.560155] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:23.118 Running I/O for 1 seconds... 
00:25:23.118 6100.00 IOPS, 23.83 MiB/s 00:25:23.118 Latency(us) 00:25:23.118 [2024-10-17T19:27:32.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.118 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:23.118 Verification LBA range: start 0x0 length 0x4000 00:25:23.118 NVMe0n1 : 1.01 6181.56 24.15 0.00 0.00 20622.71 2085.24 17158.52 00:25:23.118 [2024-10-17T19:27:32.376Z] =================================================================================================================== 00:25:23.118 [2024-10-17T19:27:32.376Z] Total : 6181.56 24.15 0.00 0.00 20622.71 2085.24 17158.52 00:25:23.118 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.118 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:23.375 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.632 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.632 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:23.890 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:24.146 19:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75560 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75560 ']' 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75560 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75560 00:25:27.426 killing process with pid 75560 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75560' 00:25:27.426 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75560 00:25:27.427 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75560 00:25:27.684 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:27.684 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.942 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.942 rmmod nvme_tcp 00:25:27.942 rmmod nvme_fabrics 00:25:28.199 rmmod nvme_keyring 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 75303 ']' 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 75303 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75303 ']' 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75303 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75303 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75303' 00:25:28.199 killing process with pid 75303 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75303 00:25:28.199 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75303 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:28.456 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:25:28.714 00:25:28.714 real 0m33.428s 00:25:28.714 user 2m9.113s 00:25:28.714 sys 0m5.793s 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.714 ************************************ 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 END TEST nvmf_failover 00:25:28.714 ************************************ 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 ************************************ 00:25:28.714 START TEST nvmf_host_discovery 00:25:28.714 ************************************ 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:28.714 * Looking for test storage... 
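Editor's note: before the discovery test begins, the failover run above tears its environment down. A condensed, hedged sketch of that cleanup as it appears in the trace (the _remove_spdk_ns helper body is not expanded in the log, so the final namespace removal is an assumption):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics           # unload host-side modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore             # drop only the SPDK-tagged rules
    ip link delete nvmf_br type bridge                               # remove the test bridge
    ip link delete nvmf_init_if && ip link delete nvmf_init_if2      # initiator-side veth ends
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # _remove_spdk_ns (not expanded in the trace) presumably deletes nvmf_tgt_ns_spdk itself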
00:25:28.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:28.714 19:27:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.974 --rc genhtml_branch_coverage=1 00:25:28.974 --rc genhtml_function_coverage=1 00:25:28.974 --rc genhtml_legend=1 00:25:28.974 --rc geninfo_all_blocks=1 00:25:28.974 --rc geninfo_unexecuted_blocks=1 00:25:28.974 00:25:28.974 ' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.974 --rc genhtml_branch_coverage=1 00:25:28.974 --rc genhtml_function_coverage=1 00:25:28.974 --rc genhtml_legend=1 00:25:28.974 --rc geninfo_all_blocks=1 00:25:28.974 --rc geninfo_unexecuted_blocks=1 00:25:28.974 00:25:28.974 ' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.974 --rc genhtml_branch_coverage=1 00:25:28.974 --rc genhtml_function_coverage=1 00:25:28.974 --rc genhtml_legend=1 00:25:28.974 --rc geninfo_all_blocks=1 00:25:28.974 --rc geninfo_unexecuted_blocks=1 00:25:28.974 00:25:28.974 ' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:28.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.974 --rc genhtml_branch_coverage=1 00:25:28.974 --rc genhtml_function_coverage=1 00:25:28.974 --rc genhtml_legend=1 00:25:28.974 --rc geninfo_all_blocks=1 00:25:28.974 --rc geninfo_unexecuted_blocks=1 00:25:28.974 00:25:28.974 ' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.974 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
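Editor's note: a short recap of the addressing these variables describe, with values taken from the trace; the nvme-cli invocation in the last comment is illustrative only and is not part of this test, which drives discovery through SPDK's host stack over /tmp/host.sock:

    NVMF_FIRST_INITIATOR_IP=10.0.0.1    # nvmf_init_if, default netns
    NVMF_SECOND_INITIATOR_IP=10.0.0.2   # nvmf_init_if2, default netns
    NVMF_FIRST_TARGET_IP=10.0.0.3       # nvmf_tgt_if, inside netns nvmf_tgt_ns_spdk
    NVMF_SECOND_TARGET_IP=10.0.0.4      # nvmf_tgt_if2, inside netns nvmf_tgt_ns_spdk
    DISCOVERY_PORT=8009                 # discovery subsystem nqn.2014-08.org.nvmexpress.discovery
    # illustrative only: a kernel initiator could query the same discovery service with
    #   nvme discover -t tcp -a 10.0.0.3 -s 8009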
00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:28.975 Cannot find device "nvmf_init_br" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:28.975 Cannot find device "nvmf_init_br2" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:28.975 Cannot find device "nvmf_tgt_br" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.975 Cannot find device "nvmf_tgt_br2" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:28.975 Cannot find device "nvmf_init_br" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:28.975 Cannot find device "nvmf_init_br2" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:28.975 Cannot find device "nvmf_tgt_br" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:28.975 Cannot find device "nvmf_tgt_br2" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:28.975 Cannot find device "nvmf_br" 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:25:28.975 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:29.283 Cannot find device "nvmf_init_if" 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:29.283 Cannot find device "nvmf_init_if2" 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:29.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:25:29.283 00:25:29.283 --- 10.0.0.3 ping statistics --- 00:25:29.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.283 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:29.283 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:29.283 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:25:29.283 00:25:29.283 --- 10.0.0.4 ping statistics --- 00:25:29.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.283 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:29.283 00:25:29.283 --- 10.0.0.1 ping statistics --- 00:25:29.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.283 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:29.283 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:29.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:29.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:25:29.283 00:25:29.283 --- 10.0.0.2 ping statistics --- 00:25:29.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.283 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=75964 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 75964 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75964 ']' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.284 19:27:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.542 [2024-10-17 19:27:38.581196] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
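[Editor's sketch] For readers following the trace above, the network plumbing that nvmf/common.sh builds before launching the target (veth pairs, the nvmf_tgt_ns_spdk namespace, the nvmf_br bridge, iptables openings for port 4420, and the ping reachability checks) condenses to roughly the following shell sketch. All names and addresses are taken from the log itself; error handling and the helper-function structure of nvmf/common.sh are omitted, so treat this as a minimal reconstruction rather than the script's exact code.

  # Sketch of the test-bed topology traced above (names/addresses from the log).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side and namespace-side veth peers together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # Allow NVMe/TCP (port 4420) in and bridge-local forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Reachability checks, as in the ping output above.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1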
00:25:29.542 [2024-10-17 19:27:38.581504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.542 [2024-10-17 19:27:38.717525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.799 [2024-10-17 19:27:38.803522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.800 [2024-10-17 19:27:38.803592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.800 [2024-10-17 19:27:38.803607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.800 [2024-10-17 19:27:38.803617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.800 [2024-10-17 19:27:38.803627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.800 [2024-10-17 19:27:38.804290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.800 [2024-10-17 19:27:38.881718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.365 [2024-10-17 19:27:39.608898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.365 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.365 [2024-10-17 19:27:39.621034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.623 19:27:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.623 null0 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.623 null1 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.623 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75993 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75993 /tmp/host.sock 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75993 ']' 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.623 19:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.623 [2024-10-17 19:27:39.713584] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
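[Editor's sketch] The RPC-driven part of the discovery test traced around this point (target provisioning followed by a second nvmf_tgt acting as the NVMe-oF host) can be condensed as below. The sketch simply replays the rpc_cmd invocations visible in the log using SPDK's scripts/rpc.py; the rpc_cmd/nvmfappstart wrappers, default socket paths, and backgrounding details are assumptions beyond what the trace shows.

  # Target side (nvmf_tgt running in nvmf_tgt_ns_spdk, default RPC socket):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.3 -s 8009          # discovery service on port 8009
  rpc.py bdev_null_create null0 1000 512      # null bdevs later exposed as namespaces
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine

  # Host side (a second nvmf_tgt acting as the host, RPC on /tmp/host.sock):
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
         -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test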
00:25:30.623 [2024-10-17 19:27:39.713888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75993 ] 00:25:30.623 [2024-10-17 19:27:39.854071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.880 [2024-10-17 19:27:39.924226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.880 [2024-10-17 19:27:39.980909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.815 19:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.815 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 [2024-10-17 19:27:41.109397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.074 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.343 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:32.343 19:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:32.615 [2024-10-17 19:27:41.754175] bdev_nvme.c:7260:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:32.615 [2024-10-17 19:27:41.754225] bdev_nvme.c:7346:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:32.615 [2024-10-17 19:27:41.754251] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:32.615 [2024-10-17 19:27:41.760223] bdev_nvme.c:7189:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:25:32.615 [2024-10-17 19:27:41.817626] 
bdev_nvme.c:7079:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:32.615 [2024-10-17 19:27:41.817688] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.181 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.182 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.441 19:27:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 [2024-10-17 19:27:42.710921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:33.701 [2024-10-17 19:27:42.712065] bdev_nvme.c:7242:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:33.701 [2024-10-17 19:27:42.712110] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.701 [2024-10-17 19:27:42.718040] bdev_nvme.c:7184:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:25:33.701 19:27:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 [2024-10-17 19:27:42.777571] bdev_nvme.c:7079:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:33.701 [2024-10-17 19:27:42.777594] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:33.701 [2024-10-17 19:27:42.777602] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.701 [2024-10-17 19:27:42.939379] bdev_nvme.c:7242:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:25:33.701 [2024-10-17 19:27:42.940852] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.701 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.702 [2024-10-17 19:27:42.944737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.702 [2024-10-17 19:27:42.944775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.702 [2024-10-17 19:27:42.944789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.702 [2024-10-17 19:27:42.944799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.702 [2024-10-17 19:27:42.944810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.702 [2024-10-17 19:27:42.944819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.702 [2024-10-17 19:27:42.944829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.702 [2024-10-17 19:27:42.944838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.702 [2024-10-17 19:27:42.944847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c48950 is same with the state(6) to be set 00:25:33.702 [2024-10-17 19:27:42.945964] bdev_nvme.c:7047:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:25:33.702 [2024-10-17 19:27:42.945997] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:25:33.702 [2024-10-17 19:27:42.946072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c48950 (9): Bad file descriptor 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.702 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:33.971 19:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.971 19:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:33.971 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.972 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:34.231 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.232 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.232 19:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 [2024-10-17 19:27:44.344939] bdev_nvme.c:7260:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:35.250 [2024-10-17 19:27:44.344982] bdev_nvme.c:7346:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:35.250 [2024-10-17 19:27:44.345002] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:35.250 [2024-10-17 19:27:44.350973] bdev_nvme.c:7189:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:25:35.250 [2024-10-17 19:27:44.412581] bdev_nvme.c:7079:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:35.250 [2024-10-17 19:27:44.412648] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.250 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 request: 00:25:35.251 { 00:25:35.251 "name": "nvme", 00:25:35.251 "trtype": "tcp", 00:25:35.251 "traddr": "10.0.0.3", 00:25:35.251 "adrfam": "ipv4", 00:25:35.251 "trsvcid": "8009", 00:25:35.251 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.251 "wait_for_attach": true, 00:25:35.251 "method": "bdev_nvme_start_discovery", 00:25:35.251 "req_id": 1 00:25:35.251 } 00:25:35.251 Got JSON-RPC error response 00:25:35.251 response: 00:25:35.251 { 00:25:35.251 "code": -17, 00:25:35.251 "message": "File exists" 00:25:35.251 } 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.251 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 request: 00:25:35.510 { 00:25:35.510 "name": "nvme_second", 00:25:35.510 "trtype": "tcp", 00:25:35.510 "traddr": "10.0.0.3", 00:25:35.510 "adrfam": "ipv4", 00:25:35.510 "trsvcid": "8009", 00:25:35.510 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.510 "wait_for_attach": true, 00:25:35.510 "method": "bdev_nvme_start_discovery", 00:25:35.510 "req_id": 1 00:25:35.510 } 00:25:35.510 Got JSON-RPC error response 00:25:35.510 response: 00:25:35.510 { 00:25:35.510 "code": -17, 00:25:35.510 "message": "File exists" 00:25:35.510 } 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:35.510 19:27:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.510 19:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.445 [2024-10-17 19:27:45.681066] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.445 [2024-10-17 19:27:45.681178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce17b0 with addr=10.0.0.3, port=8010 00:25:36.445 [2024-10-17 19:27:45.681209] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:36.445 [2024-10-17 19:27:45.681220] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:36.445 [2024-10-17 19:27:45.681230] bdev_nvme.c:7328:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:25:37.818 [2024-10-17 19:27:46.681059] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.818 [2024-10-17 19:27:46.681170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce17b0 with addr=10.0.0.3, port=8010 00:25:37.818 [2024-10-17 19:27:46.681203] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:37.818 [2024-10-17 19:27:46.681214] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:37.818 [2024-10-17 19:27:46.681225] bdev_nvme.c:7328:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:25:38.754 [2024-10-17 19:27:47.680875] bdev_nvme.c:7303:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:25:38.754 request: 00:25:38.754 { 00:25:38.754 "name": "nvme_second", 00:25:38.754 "trtype": "tcp", 00:25:38.754 "traddr": "10.0.0.3", 00:25:38.754 "adrfam": "ipv4", 00:25:38.754 "trsvcid": "8010", 00:25:38.754 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:38.754 "wait_for_attach": false, 00:25:38.754 "attach_timeout_ms": 3000, 00:25:38.754 "method": "bdev_nvme_start_discovery", 00:25:38.754 "req_id": 1 00:25:38.754 } 00:25:38.754 Got JSON-RPC error response 00:25:38.754 response: 00:25:38.754 { 00:25:38.754 "code": -110, 00:25:38.754 "message": "Connection timed out" 00:25:38.754 } 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75993 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.754 rmmod nvme_tcp 00:25:38.754 rmmod nvme_fabrics 00:25:38.754 rmmod nvme_keyring 00:25:38.754 19:27:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 75964 ']' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 75964 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75964 ']' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75964 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75964 00:25:38.754 killing process with pid 75964 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75964' 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75964 00:25:38.754 19:27:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75964 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
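For reference, the two negative-path checks the discovery trace above exercised can be reproduced with SPDK's scripts/rpc.py (which the test's rpc_cmd helper wraps) against the host-side app's RPC socket used in this run (/tmp/host.sock). This is only an illustrative sketch, assuming a discovery service is still listening on 10.0.0.3:8009 and nothing is listening on 8010; every flag and address is taken from the trace, and only the comments and the trailing "|| echo" checks are added here.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach to the discovery service and wait (-w) until the discovered subsystems are attached.
    "$rpc_py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Reusing the controller name, or pointing a second name at a discovery service that is
    # already attached, is rejected with JSON-RPC error -17 "File exists" (seen twice above).
    "$rpc_py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "rejected as expected"

    # With nothing listening on 8010, a bounded attach (-T, in milliseconds) gives up after the
    # timeout with JSON-RPC error -110 "Connection timed out".
    "$rpc_py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "timed out as expected"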
00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:39.012 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:25:39.271 00:25:39.271 real 0m10.576s 00:25:39.271 user 0m19.594s 00:25:39.271 sys 0m2.256s 00:25:39.271 ************************************ 00:25:39.271 END TEST nvmf_host_discovery 00:25:39.271 ************************************ 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.271 ************************************ 00:25:39.271 START TEST nvmf_host_multipath_status 00:25:39.271 ************************************ 00:25:39.271 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.530 * Looking for test storage... 
00:25:39.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.530 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:39.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.530 --rc genhtml_branch_coverage=1 00:25:39.530 --rc genhtml_function_coverage=1 00:25:39.530 --rc genhtml_legend=1 00:25:39.530 --rc geninfo_all_blocks=1 00:25:39.531 --rc geninfo_unexecuted_blocks=1 00:25:39.531 00:25:39.531 ' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:39.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.531 --rc genhtml_branch_coverage=1 00:25:39.531 --rc genhtml_function_coverage=1 00:25:39.531 --rc genhtml_legend=1 00:25:39.531 --rc geninfo_all_blocks=1 00:25:39.531 --rc geninfo_unexecuted_blocks=1 00:25:39.531 00:25:39.531 ' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:39.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.531 --rc genhtml_branch_coverage=1 00:25:39.531 --rc genhtml_function_coverage=1 00:25:39.531 --rc genhtml_legend=1 00:25:39.531 --rc geninfo_all_blocks=1 00:25:39.531 --rc geninfo_unexecuted_blocks=1 00:25:39.531 00:25:39.531 ' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:39.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.531 --rc genhtml_branch_coverage=1 00:25:39.531 --rc genhtml_function_coverage=1 00:25:39.531 --rc genhtml_legend=1 00:25:39.531 --rc geninfo_all_blocks=1 00:25:39.531 --rc geninfo_unexecuted_blocks=1 00:25:39.531 00:25:39.531 ' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.531 19:27:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:39.531 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:39.790 Cannot find device "nvmf_init_br" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:39.790 Cannot find device "nvmf_init_br2" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:39.790 Cannot find device "nvmf_tgt_br" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.790 Cannot find device "nvmf_tgt_br2" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:39.790 Cannot find device "nvmf_init_br" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:39.790 Cannot find device "nvmf_init_br2" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:39.790 Cannot find device "nvmf_tgt_br" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:39.790 Cannot find device "nvmf_tgt_br2" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:39.790 Cannot find device "nvmf_br" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:25:39.790 Cannot find device "nvmf_init_if" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:39.790 Cannot find device "nvmf_init_if2" 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:39.790 19:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:39.790 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:40.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:25:40.049 00:25:40.049 --- 10.0.0.3 ping statistics --- 00:25:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.049 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:40.049 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:40.049 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:25:40.049 00:25:40.049 --- 10.0.0.4 ping statistics --- 00:25:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.049 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:25:40.049 00:25:40.049 --- 10.0.0.1 ping statistics --- 00:25:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.049 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:40.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:40.049 00:25:40.049 --- 10.0.0.2 ping statistics --- 00:25:40.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.049 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=76504 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 76504 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76504 ']' 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
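The nvmf_veth_init step above stitches the test topology together with plain iproute2 commands before the target is provisioned. A condensed sketch of what it builds, using one initiator and one target interface (the real helper also creates the *_if2 pair for 10.0.0.2/10.0.0.4); all names, addresses, and rules mirror the trace, and only the comments are added:

    # Target side lives in its own network namespace; the host acts as the initiator.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface: *_if is the endpoint, *_br gets enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 for the initiator, 10.0.0.3 for the target inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A single bridge joins the host-side peers, iptables admits NVMe/TCP on port 4420,
    # and a ping confirms the path before any RPCs are issued.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3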
00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.049 19:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.049 [2024-10-17 19:27:49.302118] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:25:40.049 [2024-10-17 19:27:49.302260] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.307 [2024-10-17 19:27:49.448600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:40.307 [2024-10-17 19:27:49.553520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.307 [2024-10-17 19:27:49.553632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.307 [2024-10-17 19:27:49.553660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.307 [2024-10-17 19:27:49.553680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.307 [2024-10-17 19:27:49.553696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.307 [2024-10-17 19:27:49.555483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.307 [2024-10-17 19:27:49.555501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.565 [2024-10-17 19:27:49.636729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76504 00:25:41.130 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:41.697 [2024-10-17 19:27:50.683722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.697 19:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:41.955 Malloc0 00:25:41.955 19:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:42.213 19:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.471 19:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:42.729 [2024-10-17 19:27:51.918410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:42.729 19:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:42.987 [2024-10-17 19:27:52.222787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:42.987 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76565 00:25:42.987 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76565 /var/tmp/bdevperf.sock 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76565 ']' 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
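Condensed, the target-side configuration issued through rpc.py in the trace above (host/multipath_status.sh@36-@42) amounts to the following sequence; the transport options, Malloc0 geometry, NQN and listener addresses are exactly the ones used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

With both listeners on 10.0.0.3 (ports 4420 and 4421), bdevperf can then attach the same subsystem twice with -x multipath, which is what the @55/@56 attach_controller calls below do.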
00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.244 19:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:44.180 19:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.180 19:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:44.180 19:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:44.441 19:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:45.006 Nvme0n1 00:25:45.006 19:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:45.264 Nvme0n1 00:25:45.264 19:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:45.264 19:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:47.792 19:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:47.792 19:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:47.792 19:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:48.049 19:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:48.984 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:48.984 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:48.984 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.984 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.550 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.550 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.550 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.550 19:27:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.808 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.808 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.808 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.808 19:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.067 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.067 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.067 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.067 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.324 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.324 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.324 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.324 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.582 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.582 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.582 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.582 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.839 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.839 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:50.839 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:51.097 19:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:51.355 19:28:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:52.733 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.734 19:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.008 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.008 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.008 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.008 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.265 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.265 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.265 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.265 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.831 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.831 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.831 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.831 19:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.831 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.831 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.831 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.831 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.398 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.398 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:54.398 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:54.657 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:25:54.915 19:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:55.849 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:55.849 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.849 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.849 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.466 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.466 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.466 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.466 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.724 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.724 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.724 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.724 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.982 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.982 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:25:56.982 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.982 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.549 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.549 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.549 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.549 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.807 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.807 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.807 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.807 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.065 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.065 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:58.065 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:58.322 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:58.579 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:59.513 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:59.513 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.513 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.513 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.078 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.078 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.078 19:28:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.078 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.336 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.336 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.336 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.336 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.595 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.595 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.595 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.595 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.853 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.853 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.853 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.853 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.112 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.112 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:01.112 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.112 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.371 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.371 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:01.371 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:01.938 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:01.938 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.312 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.570 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.570 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.570 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.570 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.828 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.828 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.828 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.828 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.087 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.087 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:04.087 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.087 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:26:04.653 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.654 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:04.654 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.654 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.912 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.912 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:04.912 19:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:05.171 19:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:05.429 19:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:06.362 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:06.362 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:06.362 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.362 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.929 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.929 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:06.929 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.929 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.188 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.188 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.188 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.188 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:26:07.446 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.446 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.446 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.446 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.704 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.704 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:07.704 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.704 19:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.971 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.971 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.971 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.971 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.229 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.229 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:08.794 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:08.794 19:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:26:09.052 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:09.310 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:10.268 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:10.268 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.268 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
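Each ANA transition in this test is driven by the set_ANA_state helper seen at @59/@60 above: two nvmf_subsystem_listener_set_ana_state calls against the target, one per listener. A sketch assembled from those trace lines:

    set_ANA_state() {
        # $1 / $2: ANA state for the 4420 / 4421 listener
        # (optimized, non_optimized or inaccessible in this run)
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

The sleep 1 that follows each transition in the trace presumably gives the host's multipath logic time to observe the new states before check_status runs.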
00:26:10.268 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.525 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.525 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.525 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.525 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.783 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.783 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.783 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.783 19:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.040 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.040 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.040 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.040 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.298 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.298 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.298 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.298 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.862 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.862 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.862 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.862 19:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.121 19:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.121 
19:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:12.121 19:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:12.379 19:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:12.637 19:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:14.026 19:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:14.026 19:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.026 19:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.026 19:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.026 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.026 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.026 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.026 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.283 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.283 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.283 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.283 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.848 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.848 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.848 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.848 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.106 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.106 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.106 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.106 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.672 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.672 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.672 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.672 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.932 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.932 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:15.932 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:16.189 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:26:16.755 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:17.688 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:17.688 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.688 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.688 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.946 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.946 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.946 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.946 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.202 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.202 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:26:18.202 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.202 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.458 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.458 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.458 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.458 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.022 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.022 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.022 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.022 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.278 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.278 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.278 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.278 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.535 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.535 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:19.535 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:19.793 19:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:20.052 19:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:20.983 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:20.983 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.983 19:28:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.983 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:21.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.547 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.804 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.804 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.804 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.804 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.061 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.061 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.061 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.061 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.319 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.319 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.319 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.319 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.576 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.577 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:22.577 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.577 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76565 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76565 ']' 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76565 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76565 00:26:23.144 killing process with pid 76565 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76565' 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76565 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76565 00:26:23.144 { 00:26:23.144 "results": [ 00:26:23.144 { 00:26:23.144 "job": "Nvme0n1", 00:26:23.144 "core_mask": "0x4", 00:26:23.144 "workload": "verify", 00:26:23.144 "status": "terminated", 00:26:23.144 "verify_range": { 00:26:23.144 "start": 0, 00:26:23.144 "length": 16384 00:26:23.144 }, 00:26:23.144 "queue_depth": 128, 00:26:23.144 "io_size": 4096, 00:26:23.144 "runtime": 37.608368, 00:26:23.144 "iops": 8293.313870998072, 00:26:23.144 "mibps": 32.39575730858622, 00:26:23.144 "io_failed": 0, 00:26:23.144 "io_timeout": 0, 00:26:23.144 "avg_latency_us": 15401.753381985602, 00:26:23.144 "min_latency_us": 692.5963636363637, 00:26:23.144 "max_latency_us": 4026531.84 00:26:23.144 } 00:26:23.144 ], 00:26:23.144 "core_count": 1 00:26:23.144 } 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76565 00:26:23.144 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:23.144 [2024-10-17 19:27:52.305617] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:26:23.144 [2024-10-17 19:27:52.305746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76565 ] 00:26:23.144 [2024-10-17 19:27:52.438096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.144 [2024-10-17 19:27:52.517284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.144 [2024-10-17 19:27:52.590291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:23.144 Running I/O for 90 seconds... 
00:26:23.144 7204.00 IOPS, 28.14 MiB/s [2024-10-17T19:28:32.402Z] 8200.00 IOPS, 32.03 MiB/s [2024-10-17T19:28:32.402Z] 8538.67 IOPS, 33.35 MiB/s [2024-10-17T19:28:32.402Z] 8690.00 IOPS, 33.95 MiB/s [2024-10-17T19:28:32.402Z] 8801.60 IOPS, 34.38 MiB/s [2024-10-17T19:28:32.402Z] 8891.33 IOPS, 34.73 MiB/s [2024-10-17T19:28:32.402Z] 8941.71 IOPS, 34.93 MiB/s [2024-10-17T19:28:32.402Z] 8965.50 IOPS, 35.02 MiB/s [2024-10-17T19:28:32.402Z] 8910.67 IOPS, 34.81 MiB/s [2024-10-17T19:28:32.402Z] 8830.20 IOPS, 34.49 MiB/s [2024-10-17T19:28:32.402Z] 8797.36 IOPS, 34.36 MiB/s [2024-10-17T19:28:32.402Z] 8770.83 IOPS, 34.26 MiB/s [2024-10-17T19:28:32.402Z] 8753.38 IOPS, 34.19 MiB/s [2024-10-17T19:28:32.402Z] 8739.57 IOPS, 34.14 MiB/s [2024-10-17T19:28:32.402Z] 8734.00 IOPS, 34.12 MiB/s [2024-10-17T19:28:32.402Z] 8718.00 IOPS, 34.05 MiB/s [2024-10-17T19:28:32.402Z] [2024-10-17 19:28:10.884874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.884970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.144 [2024-10-17 19:28:10.885276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.144 [2024-10-17 19:28:10.885314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.144 [2024-10-17 19:28:10.885606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.144 [2024-10-17 19:28:10.885621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.885971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.885988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.886337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:26:23.145 [2024-10-17 19:28:10.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.145 [2024-10-17 19:28:10.886971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.887008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.887030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.887047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.145 [2024-10-17 19:28:10.887069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.145 [2024-10-17 19:28:10.887084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.887600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.146 [2024-10-17 19:28:10.887717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.887977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.887992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.146 [2024-10-17 19:28:10.888236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.146 [2024-10-17 19:28:10.888666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.146 [2024-10-17 19:28:10.888687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.888704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.888741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.888789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.888835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:26:23.147 [2024-10-17 19:28:10.888902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.888918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.888956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.888978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.888993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.889526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.889773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.889789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.147 [2024-10-17 19:28:10.890655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.890708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.890755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.890800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.890856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.147 [2024-10-17 19:28:10.890901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.147 [2024-10-17 19:28:10.890931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.147 [2024-10-17 19:28:10.890952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:10.890982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:10.890998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:10.891042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:10.891063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.148 8360.12 IOPS, 32.66 MiB/s [2024-10-17T19:28:32.406Z] 7895.67 IOPS, 30.84 MiB/s [2024-10-17T19:28:32.406Z] 7480.11 IOPS, 29.22 MiB/s [2024-10-17T19:28:32.406Z] 7106.10 IOPS, 27.76 MiB/s [2024-10-17T19:28:32.406Z] 7039.29 IOPS, 27.50 MiB/s [2024-10-17T19:28:32.406Z] 7099.32 IOPS, 27.73 MiB/s [2024-10-17T19:28:32.406Z] 7145.09 IOPS, 27.91 MiB/s [2024-10-17T19:28:32.406Z] 7208.50 IOPS, 28.16 MiB/s [2024-10-17T19:28:32.406Z] 7296.80 IOPS, 28.50 MiB/s [2024-10-17T19:28:32.406Z] 7497.69 IOPS, 29.29 MiB/s [2024-10-17T19:28:32.406Z] 7690.15 IOPS, 30.04 MiB/s [2024-10-17T19:28:32.406Z] 7738.64 IOPS, 30.23 MiB/s [2024-10-17T19:28:32.406Z] 7758.14 IOPS, 30.31 MiB/s [2024-10-17T19:28:32.406Z] 7790.20 IOPS, 30.43 MiB/s [2024-10-17T19:28:32.406Z] 7829.74 IOPS, 30.58 MiB/s [2024-10-17T19:28:32.406Z] 7934.00 IOPS, 30.99 MiB/s [2024-10-17T19:28:32.406Z] 8047.88 IOPS, 31.44 MiB/s [2024-10-17T19:28:32.406Z] 8171.15 IOPS, 31.92 MiB/s [2024-10-17T19:28:32.406Z] [2024-10-17 19:28:29.204761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.204860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.204960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.204983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.148 [2024-10-17 19:28:29.205532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.148 [2024-10-17 19:28:29.205647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.148 [2024-10-17 19:28:29.205870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.148 [2024-10-17 19:28:29.205891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.205906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.205928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.205969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.205995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.206631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:26:23.149 [2024-10-17 19:28:29.206729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.206745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.207908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.207940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.207970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.207987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.208026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.208064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.208351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.149 [2024-10-17 19:28:29.208388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.149 [2024-10-17 19:28:29.208447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.149 [2024-10-17 19:28:29.208463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.150 [2024-10-17 19:28:29.208524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.150 [2024-10-17 19:28:29.208562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.150 [2024-10-17 19:28:29.208600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.150 [2024-10-17 19:28:29.208637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.150 [2024-10-17 19:28:29.208687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.150 [2024-10-17 19:28:29.208728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.150 [2024-10-17 19:28:29.208765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.150 [2024-10-17 19:28:29.208787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.150 [2024-10-17 19:28:29.208803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.150 8250.34 IOPS, 32.23 MiB/s [2024-10-17T19:28:32.408Z] 8275.17 IOPS, 32.32 MiB/s [2024-10-17T19:28:32.408Z] 8290.86 IOPS, 32.39 MiB/s [2024-10-17T19:28:32.408Z] Received shutdown signal, test time was about 37.609197 seconds 00:26:23.150 00:26:23.150 Latency(us) 00:26:23.150 [2024-10-17T19:28:32.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.150 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:23.150 Verification LBA range: start 0x0 length 0x4000 00:26:23.150 Nvme0n1 : 37.61 8293.31 32.40 0.00 0.00 15401.75 692.60 4026531.84 00:26:23.150 [2024-10-17T19:28:32.408Z] =================================================================================================================== 00:26:23.150 [2024-10-17T19:28:32.408Z] Total : 8293.31 32.40 0.00 0.00 15401.75 692.60 4026531.84 00:26:23.150 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.715 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.716 rmmod nvme_tcp 00:26:23.716 rmmod nvme_fabrics 00:26:23.716 rmmod nvme_keyring 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 76504 ']' 00:26:23.716 19:28:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 76504 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76504 ']' 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76504 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76504 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:23.716 killing process with pid 76504 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76504' 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76504 00:26:23.716 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76504 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.974 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.232 ************************************ 00:26:24.232 END TEST nvmf_host_multipath_status 00:26:24.232 ************************************ 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:26:24.232 00:26:24.232 real 0m44.757s 00:26:24.232 user 2m25.104s 00:26:24.232 sys 0m13.116s 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.232 ************************************ 00:26:24.232 START TEST nvmf_discovery_remove_ifc 00:26:24.232 ************************************ 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:24.232 * Looking for test storage... 
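For context, the run_test helper seen above wraps each suite in START TEST / END TEST banners and reports real/user/sys timing. A rough, hedged stand-in for that wrapper is sketched below; the banner text and the use of the bash time keyword are assumptions for illustration, not the actual autotest_common.sh implementation.

  # run_test_sketch: approximate the banner-and-timing behaviour observed in this
  # log; illustrative only, not SPDK's real run_test.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # usage (hypothetical): run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp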
00:26:24.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:24.232 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:24.490 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.491 --rc genhtml_branch_coverage=1 00:26:24.491 --rc genhtml_function_coverage=1 00:26:24.491 --rc genhtml_legend=1 00:26:24.491 --rc geninfo_all_blocks=1 00:26:24.491 --rc geninfo_unexecuted_blocks=1 00:26:24.491 00:26:24.491 ' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.491 --rc genhtml_branch_coverage=1 00:26:24.491 --rc genhtml_function_coverage=1 00:26:24.491 --rc genhtml_legend=1 00:26:24.491 --rc geninfo_all_blocks=1 00:26:24.491 --rc geninfo_unexecuted_blocks=1 00:26:24.491 00:26:24.491 ' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.491 --rc genhtml_branch_coverage=1 00:26:24.491 --rc genhtml_function_coverage=1 00:26:24.491 --rc genhtml_legend=1 00:26:24.491 --rc geninfo_all_blocks=1 00:26:24.491 --rc geninfo_unexecuted_blocks=1 00:26:24.491 00:26:24.491 ' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.491 --rc genhtml_branch_coverage=1 00:26:24.491 --rc genhtml_function_coverage=1 00:26:24.491 --rc genhtml_legend=1 00:26:24.491 --rc geninfo_all_blocks=1 00:26:24.491 --rc geninfo_unexecuted_blocks=1 00:26:24.491 00:26:24.491 ' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.491 19:28:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:24.491 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:24.492 19:28:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:24.492 Cannot find device "nvmf_init_br" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:24.492 Cannot find device "nvmf_init_br2" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:24.492 Cannot find device "nvmf_tgt_br" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.492 Cannot find device "nvmf_tgt_br2" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:24.492 Cannot find device "nvmf_init_br" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:24.492 Cannot find device "nvmf_init_br2" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:24.492 Cannot find device "nvmf_tgt_br" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:24.492 Cannot find device "nvmf_tgt_br2" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:24.492 Cannot find device "nvmf_br" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:24.492 Cannot find device "nvmf_init_if" 00:26:24.492 19:28:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:24.492 Cannot find device "nvmf_init_if2" 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.492 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.752 19:28:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.752 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:24.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.143 ms 00:26:24.753 00:26:24.753 --- 10.0.0.3 ping statistics --- 00:26:24.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.753 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:24.753 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:24.753 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:26:24.753 00:26:24.753 --- 10.0.0.4 ping statistics --- 00:26:24.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.753 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:26:24.753 00:26:24.753 --- 10.0.0.1 ping statistics --- 00:26:24.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.753 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:24.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:24.753 00:26:24.753 --- 10.0.0.2 ping statistics --- 00:26:24.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.753 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=77448 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 77448 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77448 ']' 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:24.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
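At this point the target application has been launched inside the nvmf_tgt_ns_spdk namespace and the harness blocks until its RPC socket is usable before issuing any rpc.py calls. A simplified, hedged version of that wait is sketched here; the polling interval, timeout and socket-only check are assumptions, not the real waitforlisten logic.

  # wait_for_rpc_sock: poll until the SPDK RPC UNIX-domain socket shows up,
  # giving up after a timeout. Illustrative stand-in only.
  wait_for_rpc_sock() {
      local sock=${1:-/var/tmp/spdk.sock} timeout=${2:-30} i
      for ((i = 0; i < timeout; i++)); do
          [[ -S "$sock" ]] && return 0
          sleep 1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }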
00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:24.753 19:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.011 [2024-10-17 19:28:34.023201] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:26:25.012 [2024-10-17 19:28:34.023371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.012 [2024-10-17 19:28:34.166883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.012 [2024-10-17 19:28:34.229309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.012 [2024-10-17 19:28:34.229379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.012 [2024-10-17 19:28:34.229392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.012 [2024-10-17 19:28:34.229401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.012 [2024-10-17 19:28:34.229409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.012 [2024-10-17 19:28:34.229828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.270 [2024-10-17 19:28:34.284127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.270 [2024-10-17 19:28:34.401939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.270 [2024-10-17 19:28:34.410098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:25.270 null0 00:26:25.270 [2024-10-17 19:28:34.442086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77474 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77474 /tmp/host.sock 00:26:25.270 
19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77474 ']' 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.270 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.270 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:25.528 [2024-10-17 19:28:34.529680] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:26:25.528 [2024-10-17 19:28:34.529806] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77474 ] 00:26:25.528 [2024-10-17 19:28:34.666203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.528 [2024-10-17 19:28:34.756354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.786 [2024-10-17 19:28:34.863722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.786 19:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.718 [2024-10-17 19:28:35.925393] bdev_nvme.c:7260:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:26.718 [2024-10-17 19:28:35.925440] bdev_nvme.c:7346:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:26.718 [2024-10-17 19:28:35.925461] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:26.718 [2024-10-17 19:28:35.931442] bdev_nvme.c:7189:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:26:26.976 [2024-10-17 19:28:35.989367] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:26.976 [2024-10-17 19:28:35.989461] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:26.976 [2024-10-17 19:28:35.989492] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:26.976 [2024-10-17 19:28:35.989511] bdev_nvme.c:7079:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:26:26.976 [2024-10-17 19:28:35.989542] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:26.976 [2024-10-17 19:28:35.993989] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x132f400 was disconnected and fre 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.976 ed. delete nvme_qpair. 
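A note on what the trace above is doing: host/discovery_remove_ifc.sh@69 starts discovery with --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1 and --ctrlr-loss-timeout-sec 2, so a broken connection is retried about once per second and the controller (with its nvme0n1 bdev) is dropped after roughly two seconds without a reconnect. The get_bdev_list/wait_for_bdev helpers traced at @29/@33/@34 are not printed in full; a minimal sketch reconstructed from the trace, assuming the same /tmp/host.sock RPC socket and the rpc_cmd wrapper seen above (the real bodies in host/discovery_remove_ifc.sh may differ, e.g. by bounding the loop):

    get_bdev_list() {
        # list bdev names via the host app's RPC socket, normalized to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once a second until the bdev list equals the expected value
        # (an empty argument waits for the list to drain)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }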
00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 19:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.976 19:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:27.909 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.909 19:28:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.178 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:28.178 19:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:29.172 19:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.105 19:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.478 19:28:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.478 19:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.413 [2024-10-17 19:28:41.416824] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:32.413 [2024-10-17 19:28:41.416908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.413 [2024-10-17 19:28:41.416928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.413 [2024-10-17 19:28:41.416944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.413 [2024-10-17 19:28:41.416955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.413 [2024-10-17 19:28:41.416967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.413 [2024-10-17 19:28:41.416978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.413 [2024-10-17 19:28:41.416991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.413 [2024-10-17 19:28:41.417002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.413 [2024-10-17 19:28:41.417014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.413 [2024-10-17 19:28:41.417025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.413 [2024-10-17 19:28:41.417036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302f70 is same with the state(6) to be set 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.413 [2024-10-17 19:28:41.426815] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1302f70 (9): Bad file descriptor 00:26:32.413 [2024-10-17 19:28:41.436867] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.413 19:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.346 [2024-10-17 19:28:42.498224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:26:33.346 [2024-10-17 19:28:42.498364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1302f70 with addr=10.0.0.3, port=4420 00:26:33.346 [2024-10-17 19:28:42.498404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302f70 is same with the state(6) to be set 00:26:33.346 [2024-10-17 19:28:42.498485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1302f70 (9): Bad file descriptor 00:26:33.346 [2024-10-17 19:28:42.499424] bdev_nvme.c:3063:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:33.346 [2024-10-17 19:28:42.499506] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.346 [2024-10-17 19:28:42.499530] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.346 [2024-10-17 19:28:42.499553] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.346 [2024-10-17 19:28:42.499624] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.346 [2024-10-17 19:28:42.499652] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.346 19:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.282 [2024-10-17 19:28:43.499714] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:34.282 [2024-10-17 19:28:43.499809] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:34.282 [2024-10-17 19:28:43.499824] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:34.282 [2024-10-17 19:28:43.499836] nvme_ctrlr.c:1140:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:34.282 [2024-10-17 19:28:43.499868] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.282 [2024-10-17 19:28:43.499904] bdev_nvme.c:7011:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:26:34.282 [2024-10-17 19:28:43.499976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.282 [2024-10-17 19:28:43.499993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.282 [2024-10-17 19:28:43.500009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.282 [2024-10-17 19:28:43.500019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.282 [2024-10-17 19:28:43.500030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.282 [2024-10-17 19:28:43.500040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.282 [2024-10-17 19:28:43.500051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.282 [2024-10-17 19:28:43.500060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.282 [2024-10-17 19:28:43.500072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.282 [2024-10-17 19:28:43.500081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.282 [2024-10-17 19:28:43.500091] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
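The error burst above is the expected teardown once the target-side interface vanishes: outstanding admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) complete as ABORTED - SQ DELETION, the uring reconnect attempts fail with errno 110, and when ctrlr-loss-timeout-sec expires the host deletes the controller and removes the nqn.2016-06.io.spdk:cnode0 discovery entry, which is why nvme0n1 eventually drops out of bdev_get_bdevs. A quick way to watch both sides of that from the host app, sketched here against the same socket (bdev_nvme_get_controllers is the bdev_nvme RPC that lists attached controllers):

    # the bdev disappears only after the controller is given up
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
    # the controller entry lingers until ctrlr-loss-timeout-sec expires
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'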
00:26:34.282 [2024-10-17 19:28:43.500629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1297d70 (9): Bad file descriptor 00:26:34.282 [2024-10-17 19:28:43.501644] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:34.282 [2024-10-17 19:28:43.502753] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.282 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:34.539 19:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.471 19:28:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.471 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.729 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:35.729 19:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.294 [2024-10-17 19:28:45.510246] bdev_nvme.c:7260:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:36.294 [2024-10-17 19:28:45.510546] bdev_nvme.c:7346:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:36.294 [2024-10-17 19:28:45.510621] bdev_nvme.c:7223:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:36.294 [2024-10-17 19:28:45.516315] bdev_nvme.c:7189:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:26:36.551 [2024-10-17 19:28:45.573772] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:36.551 [2024-10-17 19:28:45.574201] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:36.551 [2024-10-17 19:28:45.574363] bdev_nvme.c:8056:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:36.551 [2024-10-17 19:28:45.574431] bdev_nvme.c:7079:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:26:36.551 [2024-10-17 19:28:45.574705] bdev_nvme.c:7038:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:26:36.551 [2024-10-17 19:28:45.578858] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x133bb60 was disconnected and freed. delete nvme_qpair. 
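Taken together, the cycle exercised here (commands copied from the trace at @75/@76, @79 and @82/@83, @86) is: drop the target-side address and link, wait for nvme0n1 to disappear, then restore them and wait for the re-attach, which registers a new controller (nvme1) and therefore a new namespace name. Roughly, and assuming the helpers sketched earlier:

    # remove the interface the target listens on
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # nvme0n1 goes away after ctrlr-loss-timeout-sec

    # bring it back and expect a fresh discovery attach
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # new attach -> controller nvme1, namespace nvme1n1

The check for nvme1n1 follows immediately below in the trace.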
00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77474 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77474 ']' 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77474 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.551 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77474 00:26:36.808 killing process with pid 77474 00:26:36.808 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.808 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.808 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77474' 00:26:36.808 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77474 00:26:36.808 19:28:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77474 00:26:36.808 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:36.808 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:36.808 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.068 rmmod nvme_tcp 00:26:37.068 rmmod nvme_fabrics 00:26:37.068 rmmod nvme_keyring 00:26:37.068 19:28:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 77448 ']' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 77448 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77448 ']' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77448 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77448 00:26:37.068 killing process with pid 77448 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77448' 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77448 00:26:37.068 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77448 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:37.326 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:26:37.584 00:26:37.584 real 0m13.355s 00:26:37.584 user 0m22.547s 00:26:37.584 sys 0m2.559s 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.584 ************************************ 00:26:37.584 END TEST nvmf_discovery_remove_ifc 00:26:37.584 ************************************ 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.584 ************************************ 00:26:37.584 START TEST nvmf_identify_kernel_target 00:26:37.584 ************************************ 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:37.584 * Looking for test storage... 
00:26:37.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:37.584 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.842 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:37.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.843 --rc genhtml_branch_coverage=1 00:26:37.843 --rc genhtml_function_coverage=1 00:26:37.843 --rc genhtml_legend=1 00:26:37.843 --rc geninfo_all_blocks=1 00:26:37.843 --rc geninfo_unexecuted_blocks=1 00:26:37.843 00:26:37.843 ' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.843 --rc genhtml_branch_coverage=1 00:26:37.843 --rc genhtml_function_coverage=1 00:26:37.843 --rc genhtml_legend=1 00:26:37.843 --rc geninfo_all_blocks=1 00:26:37.843 --rc geninfo_unexecuted_blocks=1 00:26:37.843 00:26:37.843 ' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.843 --rc genhtml_branch_coverage=1 00:26:37.843 --rc genhtml_function_coverage=1 00:26:37.843 --rc genhtml_legend=1 00:26:37.843 --rc geninfo_all_blocks=1 00:26:37.843 --rc geninfo_unexecuted_blocks=1 00:26:37.843 00:26:37.843 ' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.843 --rc genhtml_branch_coverage=1 00:26:37.843 --rc genhtml_function_coverage=1 00:26:37.843 --rc genhtml_legend=1 00:26:37.843 --rc geninfo_all_blocks=1 00:26:37.843 --rc geninfo_unexecuted_blocks=1 00:26:37.843 00:26:37.843 ' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
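The long stretch that follows is nvmf/common.sh being sourced and nvmftestinit building the virtual test network for the kernel-target test. Condensed, the topology it sets up (interface names and addresses as they appear in the trace below; the real nvmf_veth_init also attaches the *_br ends to the nvmf_br bridge, brings every link up and installs the iptables ACCEPT rules shown further down):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target    10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target    10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # ties both sides together

The four pings at the end of that setup are the sanity check that host and namespace can reach each other across the bridge.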
00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:37.843 19:28:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:37.843 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:37.844 19:28:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:37.844 Cannot find device "nvmf_init_br" 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:37.844 Cannot find device "nvmf_init_br2" 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:37.844 Cannot find device "nvmf_tgt_br" 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:37.844 Cannot find device "nvmf_tgt_br2" 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:26:37.844 19:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:37.844 Cannot find device "nvmf_init_br" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:37.844 Cannot find device "nvmf_init_br2" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:37.844 Cannot find device "nvmf_tgt_br" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:37.844 Cannot find device "nvmf_tgt_br2" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:37.844 Cannot find device "nvmf_br" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:37.844 Cannot find device "nvmf_init_if" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:37.844 Cannot find device "nvmf_init_if2" 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:37.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:37.844 19:28:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:37.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:37.844 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:38.104 19:28:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:38.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:38.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:26:38.104 00:26:38.104 --- 10.0.0.3 ping statistics --- 00:26:38.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.104 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:38.104 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:38.104 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:26:38.104 00:26:38.104 --- 10.0.0.4 ping statistics --- 00:26:38.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.104 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:38.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:26:38.104 00:26:38.104 --- 10.0.0.1 ping statistics --- 00:26:38.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.104 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:38.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:38.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:26:38.104 00:26:38.104 --- 10.0.0.2 ping statistics --- 00:26:38.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.104 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:38.104 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:38.372 19:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:38.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:38.629 Waiting for block devices as requested 00:26:38.629 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:38.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.887 No valid GPT data, bailing 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:38.887 19:28:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:38.887 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:39.145 No valid GPT data, bailing 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:39.145 No valid GPT data, bailing 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:39.145 No valid GPT data, bailing 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:39.145 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -a 10.0.0.1 -t tcp -s 4420 00:26:39.404 00:26:39.404 Discovery Log Number of Records 2, Generation counter 2 00:26:39.404 =====Discovery Log Entry 0====== 00:26:39.404 trtype: tcp 00:26:39.404 adrfam: ipv4 00:26:39.404 subtype: current discovery subsystem 00:26:39.404 treq: not specified, sq flow control disable supported 00:26:39.404 portid: 1 00:26:39.404 trsvcid: 4420 00:26:39.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:39.404 traddr: 10.0.0.1 00:26:39.404 eflags: none 00:26:39.404 sectype: none 00:26:39.404 =====Discovery Log Entry 1====== 00:26:39.404 trtype: tcp 00:26:39.404 adrfam: ipv4 00:26:39.404 subtype: nvme subsystem 00:26:39.404 treq: not 
specified, sq flow control disable supported 00:26:39.404 portid: 1 00:26:39.404 trsvcid: 4420 00:26:39.404 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:39.404 traddr: 10.0.0.1 00:26:39.404 eflags: none 00:26:39.404 sectype: none 00:26:39.404 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:39.404 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:39.405 ===================================================== 00:26:39.405 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:39.405 ===================================================== 00:26:39.405 Controller Capabilities/Features 00:26:39.405 ================================ 00:26:39.405 Vendor ID: 0000 00:26:39.405 Subsystem Vendor ID: 0000 00:26:39.405 Serial Number: a975b4504e10276f5c95 00:26:39.405 Model Number: Linux 00:26:39.405 Firmware Version: 6.8.9-20 00:26:39.405 Recommended Arb Burst: 0 00:26:39.405 IEEE OUI Identifier: 00 00 00 00:26:39.405 Multi-path I/O 00:26:39.405 May have multiple subsystem ports: No 00:26:39.405 May have multiple controllers: No 00:26:39.405 Associated with SR-IOV VF: No 00:26:39.405 Max Data Transfer Size: Unlimited 00:26:39.405 Max Number of Namespaces: 0 00:26:39.405 Max Number of I/O Queues: 1024 00:26:39.405 NVMe Specification Version (VS): 1.3 00:26:39.405 NVMe Specification Version (Identify): 1.3 00:26:39.405 Maximum Queue Entries: 1024 00:26:39.405 Contiguous Queues Required: No 00:26:39.405 Arbitration Mechanisms Supported 00:26:39.405 Weighted Round Robin: Not Supported 00:26:39.405 Vendor Specific: Not Supported 00:26:39.405 Reset Timeout: 7500 ms 00:26:39.405 Doorbell Stride: 4 bytes 00:26:39.405 NVM Subsystem Reset: Not Supported 00:26:39.405 Command Sets Supported 00:26:39.405 NVM Command Set: Supported 00:26:39.405 Boot Partition: Not Supported 00:26:39.405 Memory Page Size Minimum: 4096 bytes 00:26:39.405 Memory Page Size Maximum: 4096 bytes 00:26:39.405 Persistent Memory Region: Not Supported 00:26:39.405 Optional Asynchronous Events Supported 00:26:39.405 Namespace Attribute Notices: Not Supported 00:26:39.405 Firmware Activation Notices: Not Supported 00:26:39.405 ANA Change Notices: Not Supported 00:26:39.405 PLE Aggregate Log Change Notices: Not Supported 00:26:39.405 LBA Status Info Alert Notices: Not Supported 00:26:39.405 EGE Aggregate Log Change Notices: Not Supported 00:26:39.405 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.405 Zone Descriptor Change Notices: Not Supported 00:26:39.405 Discovery Log Change Notices: Supported 00:26:39.405 Controller Attributes 00:26:39.405 128-bit Host Identifier: Not Supported 00:26:39.405 Non-Operational Permissive Mode: Not Supported 00:26:39.405 NVM Sets: Not Supported 00:26:39.405 Read Recovery Levels: Not Supported 00:26:39.405 Endurance Groups: Not Supported 00:26:39.405 Predictable Latency Mode: Not Supported 00:26:39.405 Traffic Based Keep ALive: Not Supported 00:26:39.405 Namespace Granularity: Not Supported 00:26:39.405 SQ Associations: Not Supported 00:26:39.405 UUID List: Not Supported 00:26:39.405 Multi-Domain Subsystem: Not Supported 00:26:39.405 Fixed Capacity Management: Not Supported 00:26:39.405 Variable Capacity Management: Not Supported 00:26:39.405 Delete Endurance Group: Not Supported 00:26:39.405 Delete NVM Set: Not Supported 00:26:39.405 Extended LBA Formats Supported: Not Supported 00:26:39.405 Flexible Data 
Placement Supported: Not Supported 00:26:39.405 00:26:39.405 Controller Memory Buffer Support 00:26:39.405 ================================ 00:26:39.405 Supported: No 00:26:39.405 00:26:39.405 Persistent Memory Region Support 00:26:39.405 ================================ 00:26:39.405 Supported: No 00:26:39.405 00:26:39.405 Admin Command Set Attributes 00:26:39.405 ============================ 00:26:39.405 Security Send/Receive: Not Supported 00:26:39.405 Format NVM: Not Supported 00:26:39.405 Firmware Activate/Download: Not Supported 00:26:39.405 Namespace Management: Not Supported 00:26:39.405 Device Self-Test: Not Supported 00:26:39.405 Directives: Not Supported 00:26:39.405 NVMe-MI: Not Supported 00:26:39.405 Virtualization Management: Not Supported 00:26:39.405 Doorbell Buffer Config: Not Supported 00:26:39.405 Get LBA Status Capability: Not Supported 00:26:39.405 Command & Feature Lockdown Capability: Not Supported 00:26:39.405 Abort Command Limit: 1 00:26:39.405 Async Event Request Limit: 1 00:26:39.405 Number of Firmware Slots: N/A 00:26:39.405 Firmware Slot 1 Read-Only: N/A 00:26:39.405 Firmware Activation Without Reset: N/A 00:26:39.405 Multiple Update Detection Support: N/A 00:26:39.405 Firmware Update Granularity: No Information Provided 00:26:39.405 Per-Namespace SMART Log: No 00:26:39.405 Asymmetric Namespace Access Log Page: Not Supported 00:26:39.405 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:39.405 Command Effects Log Page: Not Supported 00:26:39.405 Get Log Page Extended Data: Supported 00:26:39.405 Telemetry Log Pages: Not Supported 00:26:39.405 Persistent Event Log Pages: Not Supported 00:26:39.405 Supported Log Pages Log Page: May Support 00:26:39.405 Commands Supported & Effects Log Page: Not Supported 00:26:39.405 Feature Identifiers & Effects Log Page:May Support 00:26:39.405 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.405 Data Area 4 for Telemetry Log: Not Supported 00:26:39.405 Error Log Page Entries Supported: 1 00:26:39.405 Keep Alive: Not Supported 00:26:39.405 00:26:39.405 NVM Command Set Attributes 00:26:39.405 ========================== 00:26:39.405 Submission Queue Entry Size 00:26:39.405 Max: 1 00:26:39.405 Min: 1 00:26:39.405 Completion Queue Entry Size 00:26:39.405 Max: 1 00:26:39.405 Min: 1 00:26:39.405 Number of Namespaces: 0 00:26:39.405 Compare Command: Not Supported 00:26:39.405 Write Uncorrectable Command: Not Supported 00:26:39.405 Dataset Management Command: Not Supported 00:26:39.405 Write Zeroes Command: Not Supported 00:26:39.405 Set Features Save Field: Not Supported 00:26:39.405 Reservations: Not Supported 00:26:39.405 Timestamp: Not Supported 00:26:39.405 Copy: Not Supported 00:26:39.405 Volatile Write Cache: Not Present 00:26:39.405 Atomic Write Unit (Normal): 1 00:26:39.405 Atomic Write Unit (PFail): 1 00:26:39.405 Atomic Compare & Write Unit: 1 00:26:39.405 Fused Compare & Write: Not Supported 00:26:39.405 Scatter-Gather List 00:26:39.405 SGL Command Set: Supported 00:26:39.405 SGL Keyed: Not Supported 00:26:39.405 SGL Bit Bucket Descriptor: Not Supported 00:26:39.405 SGL Metadata Pointer: Not Supported 00:26:39.405 Oversized SGL: Not Supported 00:26:39.405 SGL Metadata Address: Not Supported 00:26:39.405 SGL Offset: Supported 00:26:39.405 Transport SGL Data Block: Not Supported 00:26:39.405 Replay Protected Memory Block: Not Supported 00:26:39.405 00:26:39.405 Firmware Slot Information 00:26:39.405 ========================= 00:26:39.405 Active slot: 0 00:26:39.405 00:26:39.405 00:26:39.405 Error Log 
00:26:39.405 ========= 00:26:39.405 00:26:39.405 Active Namespaces 00:26:39.405 ================= 00:26:39.405 Discovery Log Page 00:26:39.405 ================== 00:26:39.405 Generation Counter: 2 00:26:39.405 Number of Records: 2 00:26:39.405 Record Format: 0 00:26:39.405 00:26:39.405 Discovery Log Entry 0 00:26:39.405 ---------------------- 00:26:39.405 Transport Type: 3 (TCP) 00:26:39.405 Address Family: 1 (IPv4) 00:26:39.405 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:39.405 Entry Flags: 00:26:39.405 Duplicate Returned Information: 0 00:26:39.405 Explicit Persistent Connection Support for Discovery: 0 00:26:39.405 Transport Requirements: 00:26:39.405 Secure Channel: Not Specified 00:26:39.405 Port ID: 1 (0x0001) 00:26:39.405 Controller ID: 65535 (0xffff) 00:26:39.405 Admin Max SQ Size: 32 00:26:39.405 Transport Service Identifier: 4420 00:26:39.405 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:39.405 Transport Address: 10.0.0.1 00:26:39.405 Discovery Log Entry 1 00:26:39.405 ---------------------- 00:26:39.405 Transport Type: 3 (TCP) 00:26:39.405 Address Family: 1 (IPv4) 00:26:39.405 Subsystem Type: 2 (NVM Subsystem) 00:26:39.405 Entry Flags: 00:26:39.405 Duplicate Returned Information: 0 00:26:39.405 Explicit Persistent Connection Support for Discovery: 0 00:26:39.405 Transport Requirements: 00:26:39.405 Secure Channel: Not Specified 00:26:39.405 Port ID: 1 (0x0001) 00:26:39.405 Controller ID: 65535 (0xffff) 00:26:39.405 Admin Max SQ Size: 32 00:26:39.405 Transport Service Identifier: 4420 00:26:39.405 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:39.405 Transport Address: 10.0.0.1 00:26:39.405 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.664 get_feature(0x01) failed 00:26:39.664 get_feature(0x02) failed 00:26:39.664 get_feature(0x04) failed 00:26:39.664 ===================================================== 00:26:39.664 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:39.664 ===================================================== 00:26:39.664 Controller Capabilities/Features 00:26:39.664 ================================ 00:26:39.664 Vendor ID: 0000 00:26:39.664 Subsystem Vendor ID: 0000 00:26:39.664 Serial Number: cc75fdd87dfb20dcd842 00:26:39.664 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.664 Firmware Version: 6.8.9-20 00:26:39.664 Recommended Arb Burst: 6 00:26:39.664 IEEE OUI Identifier: 00 00 00 00:26:39.664 Multi-path I/O 00:26:39.664 May have multiple subsystem ports: Yes 00:26:39.664 May have multiple controllers: Yes 00:26:39.664 Associated with SR-IOV VF: No 00:26:39.664 Max Data Transfer Size: Unlimited 00:26:39.664 Max Number of Namespaces: 1024 00:26:39.664 Max Number of I/O Queues: 128 00:26:39.664 NVMe Specification Version (VS): 1.3 00:26:39.664 NVMe Specification Version (Identify): 1.3 00:26:39.664 Maximum Queue Entries: 1024 00:26:39.664 Contiguous Queues Required: No 00:26:39.664 Arbitration Mechanisms Supported 00:26:39.664 Weighted Round Robin: Not Supported 00:26:39.664 Vendor Specific: Not Supported 00:26:39.664 Reset Timeout: 7500 ms 00:26:39.664 Doorbell Stride: 4 bytes 00:26:39.664 NVM Subsystem Reset: Not Supported 00:26:39.664 Command Sets Supported 00:26:39.664 NVM Command Set: Supported 00:26:39.664 Boot Partition: Not Supported 00:26:39.664 Memory 
Page Size Minimum: 4096 bytes 00:26:39.664 Memory Page Size Maximum: 4096 bytes 00:26:39.664 Persistent Memory Region: Not Supported 00:26:39.664 Optional Asynchronous Events Supported 00:26:39.664 Namespace Attribute Notices: Supported 00:26:39.664 Firmware Activation Notices: Not Supported 00:26:39.664 ANA Change Notices: Supported 00:26:39.664 PLE Aggregate Log Change Notices: Not Supported 00:26:39.664 LBA Status Info Alert Notices: Not Supported 00:26:39.664 EGE Aggregate Log Change Notices: Not Supported 00:26:39.664 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.664 Zone Descriptor Change Notices: Not Supported 00:26:39.664 Discovery Log Change Notices: Not Supported 00:26:39.664 Controller Attributes 00:26:39.664 128-bit Host Identifier: Supported 00:26:39.664 Non-Operational Permissive Mode: Not Supported 00:26:39.664 NVM Sets: Not Supported 00:26:39.664 Read Recovery Levels: Not Supported 00:26:39.664 Endurance Groups: Not Supported 00:26:39.664 Predictable Latency Mode: Not Supported 00:26:39.664 Traffic Based Keep ALive: Supported 00:26:39.664 Namespace Granularity: Not Supported 00:26:39.664 SQ Associations: Not Supported 00:26:39.664 UUID List: Not Supported 00:26:39.664 Multi-Domain Subsystem: Not Supported 00:26:39.664 Fixed Capacity Management: Not Supported 00:26:39.665 Variable Capacity Management: Not Supported 00:26:39.665 Delete Endurance Group: Not Supported 00:26:39.665 Delete NVM Set: Not Supported 00:26:39.665 Extended LBA Formats Supported: Not Supported 00:26:39.665 Flexible Data Placement Supported: Not Supported 00:26:39.665 00:26:39.665 Controller Memory Buffer Support 00:26:39.665 ================================ 00:26:39.665 Supported: No 00:26:39.665 00:26:39.665 Persistent Memory Region Support 00:26:39.665 ================================ 00:26:39.665 Supported: No 00:26:39.665 00:26:39.665 Admin Command Set Attributes 00:26:39.665 ============================ 00:26:39.665 Security Send/Receive: Not Supported 00:26:39.665 Format NVM: Not Supported 00:26:39.665 Firmware Activate/Download: Not Supported 00:26:39.665 Namespace Management: Not Supported 00:26:39.665 Device Self-Test: Not Supported 00:26:39.665 Directives: Not Supported 00:26:39.665 NVMe-MI: Not Supported 00:26:39.665 Virtualization Management: Not Supported 00:26:39.665 Doorbell Buffer Config: Not Supported 00:26:39.665 Get LBA Status Capability: Not Supported 00:26:39.665 Command & Feature Lockdown Capability: Not Supported 00:26:39.665 Abort Command Limit: 4 00:26:39.665 Async Event Request Limit: 4 00:26:39.665 Number of Firmware Slots: N/A 00:26:39.665 Firmware Slot 1 Read-Only: N/A 00:26:39.665 Firmware Activation Without Reset: N/A 00:26:39.665 Multiple Update Detection Support: N/A 00:26:39.665 Firmware Update Granularity: No Information Provided 00:26:39.665 Per-Namespace SMART Log: Yes 00:26:39.665 Asymmetric Namespace Access Log Page: Supported 00:26:39.665 ANA Transition Time : 10 sec 00:26:39.665 00:26:39.665 Asymmetric Namespace Access Capabilities 00:26:39.665 ANA Optimized State : Supported 00:26:39.665 ANA Non-Optimized State : Supported 00:26:39.665 ANA Inaccessible State : Supported 00:26:39.665 ANA Persistent Loss State : Supported 00:26:39.665 ANA Change State : Supported 00:26:39.665 ANAGRPID is not changed : No 00:26:39.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:39.665 00:26:39.665 ANA Group Identifier Maximum : 128 00:26:39.665 Number of ANA Group Identifiers : 128 00:26:39.665 Max Number of Allowed Namespaces : 1024 00:26:39.665 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:26:39.665 Command Effects Log Page: Supported 00:26:39.665 Get Log Page Extended Data: Supported 00:26:39.665 Telemetry Log Pages: Not Supported 00:26:39.665 Persistent Event Log Pages: Not Supported 00:26:39.665 Supported Log Pages Log Page: May Support 00:26:39.665 Commands Supported & Effects Log Page: Not Supported 00:26:39.665 Feature Identifiers & Effects Log Page:May Support 00:26:39.665 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.665 Data Area 4 for Telemetry Log: Not Supported 00:26:39.665 Error Log Page Entries Supported: 128 00:26:39.665 Keep Alive: Supported 00:26:39.665 Keep Alive Granularity: 1000 ms 00:26:39.665 00:26:39.665 NVM Command Set Attributes 00:26:39.665 ========================== 00:26:39.665 Submission Queue Entry Size 00:26:39.665 Max: 64 00:26:39.665 Min: 64 00:26:39.665 Completion Queue Entry Size 00:26:39.665 Max: 16 00:26:39.665 Min: 16 00:26:39.665 Number of Namespaces: 1024 00:26:39.665 Compare Command: Not Supported 00:26:39.665 Write Uncorrectable Command: Not Supported 00:26:39.665 Dataset Management Command: Supported 00:26:39.665 Write Zeroes Command: Supported 00:26:39.665 Set Features Save Field: Not Supported 00:26:39.665 Reservations: Not Supported 00:26:39.665 Timestamp: Not Supported 00:26:39.665 Copy: Not Supported 00:26:39.665 Volatile Write Cache: Present 00:26:39.665 Atomic Write Unit (Normal): 1 00:26:39.665 Atomic Write Unit (PFail): 1 00:26:39.665 Atomic Compare & Write Unit: 1 00:26:39.665 Fused Compare & Write: Not Supported 00:26:39.665 Scatter-Gather List 00:26:39.665 SGL Command Set: Supported 00:26:39.665 SGL Keyed: Not Supported 00:26:39.665 SGL Bit Bucket Descriptor: Not Supported 00:26:39.665 SGL Metadata Pointer: Not Supported 00:26:39.665 Oversized SGL: Not Supported 00:26:39.665 SGL Metadata Address: Not Supported 00:26:39.665 SGL Offset: Supported 00:26:39.665 Transport SGL Data Block: Not Supported 00:26:39.665 Replay Protected Memory Block: Not Supported 00:26:39.665 00:26:39.665 Firmware Slot Information 00:26:39.665 ========================= 00:26:39.665 Active slot: 0 00:26:39.665 00:26:39.665 Asymmetric Namespace Access 00:26:39.665 =========================== 00:26:39.665 Change Count : 0 00:26:39.665 Number of ANA Group Descriptors : 1 00:26:39.665 ANA Group Descriptor : 0 00:26:39.665 ANA Group ID : 1 00:26:39.665 Number of NSID Values : 1 00:26:39.665 Change Count : 0 00:26:39.665 ANA State : 1 00:26:39.665 Namespace Identifier : 1 00:26:39.665 00:26:39.665 Commands Supported and Effects 00:26:39.665 ============================== 00:26:39.665 Admin Commands 00:26:39.665 -------------- 00:26:39.665 Get Log Page (02h): Supported 00:26:39.665 Identify (06h): Supported 00:26:39.665 Abort (08h): Supported 00:26:39.665 Set Features (09h): Supported 00:26:39.665 Get Features (0Ah): Supported 00:26:39.665 Asynchronous Event Request (0Ch): Supported 00:26:39.665 Keep Alive (18h): Supported 00:26:39.665 I/O Commands 00:26:39.665 ------------ 00:26:39.665 Flush (00h): Supported 00:26:39.665 Write (01h): Supported LBA-Change 00:26:39.665 Read (02h): Supported 00:26:39.665 Write Zeroes (08h): Supported LBA-Change 00:26:39.665 Dataset Management (09h): Supported 00:26:39.665 00:26:39.665 Error Log 00:26:39.665 ========= 00:26:39.665 Entry: 0 00:26:39.665 Error Count: 0x3 00:26:39.665 Submission Queue Id: 0x0 00:26:39.665 Command Id: 0x5 00:26:39.665 Phase Bit: 0 00:26:39.665 Status Code: 0x2 00:26:39.665 Status Code Type: 0x0 00:26:39.665 Do Not Retry: 1 00:26:39.665 Error 
Location: 0x28 00:26:39.665 LBA: 0x0 00:26:39.665 Namespace: 0x0 00:26:39.665 Vendor Log Page: 0x0 00:26:39.665 ----------- 00:26:39.665 Entry: 1 00:26:39.665 Error Count: 0x2 00:26:39.665 Submission Queue Id: 0x0 00:26:39.665 Command Id: 0x5 00:26:39.665 Phase Bit: 0 00:26:39.665 Status Code: 0x2 00:26:39.665 Status Code Type: 0x0 00:26:39.665 Do Not Retry: 1 00:26:39.665 Error Location: 0x28 00:26:39.665 LBA: 0x0 00:26:39.665 Namespace: 0x0 00:26:39.665 Vendor Log Page: 0x0 00:26:39.665 ----------- 00:26:39.665 Entry: 2 00:26:39.665 Error Count: 0x1 00:26:39.665 Submission Queue Id: 0x0 00:26:39.665 Command Id: 0x4 00:26:39.665 Phase Bit: 0 00:26:39.665 Status Code: 0x2 00:26:39.665 Status Code Type: 0x0 00:26:39.665 Do Not Retry: 1 00:26:39.665 Error Location: 0x28 00:26:39.665 LBA: 0x0 00:26:39.665 Namespace: 0x0 00:26:39.665 Vendor Log Page: 0x0 00:26:39.665 00:26:39.665 Number of Queues 00:26:39.665 ================ 00:26:39.665 Number of I/O Submission Queues: 128 00:26:39.665 Number of I/O Completion Queues: 128 00:26:39.665 00:26:39.665 ZNS Specific Controller Data 00:26:39.665 ============================ 00:26:39.665 Zone Append Size Limit: 0 00:26:39.665 00:26:39.665 00:26:39.665 Active Namespaces 00:26:39.665 ================= 00:26:39.665 get_feature(0x05) failed 00:26:39.665 Namespace ID:1 00:26:39.665 Command Set Identifier: NVM (00h) 00:26:39.665 Deallocate: Supported 00:26:39.665 Deallocated/Unwritten Error: Not Supported 00:26:39.665 Deallocated Read Value: Unknown 00:26:39.665 Deallocate in Write Zeroes: Not Supported 00:26:39.665 Deallocated Guard Field: 0xFFFF 00:26:39.665 Flush: Supported 00:26:39.665 Reservation: Not Supported 00:26:39.665 Namespace Sharing Capabilities: Multiple Controllers 00:26:39.665 Size (in LBAs): 1310720 (5GiB) 00:26:39.665 Capacity (in LBAs): 1310720 (5GiB) 00:26:39.665 Utilization (in LBAs): 1310720 (5GiB) 00:26:39.665 UUID: bdde90ac-d7fe-4e4b-a672-ce09361ecdd5 00:26:39.665 Thin Provisioning: Not Supported 00:26:39.665 Per-NS Atomic Units: Yes 00:26:39.665 Atomic Boundary Size (Normal): 0 00:26:39.665 Atomic Boundary Size (PFail): 0 00:26:39.665 Atomic Boundary Offset: 0 00:26:39.665 NGUID/EUI64 Never Reused: No 00:26:39.665 ANA group ID: 1 00:26:39.665 Namespace Write Protected: No 00:26:39.665 Number of LBA Formats: 1 00:26:39.665 Current LBA Format: LBA Format #00 00:26:39.665 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:39.665 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.665 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.666 rmmod nvme_tcp 00:26:39.666 rmmod nvme_fabrics 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:39.666 19:28:48 
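Stripped of the tracing, the identify_kernel_nvmf flow above is: load nvmet, walk the NVMe block devices while skipping zoned or in-use ones (the spdk-gpt.py "No valid GPT data, bailing" lines are that probe) and settle on /dev/nvme1n1, export it through the kernel target's configfs tree on 10.0.0.1:4420, then run spdk_nvme_identify once against the discovery NQN and once against the test subsystem. A condensed sketch of those steps; the echo destinations are inferred from the standard nvmet configfs layout, since the xtrace records only the values being written:

# configure_kernel_target, condensed (paths and NQNs as in the trace above)
modprobe nvme-tcp
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify output
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# confirm the export, then identify the discovery service and the subsystem
nvme discover -t tcp -a 10.0.0.1 -s 4420
spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The entries that follow undo all of this: nvmftestfini unloads nvme-tcp and nvme-fabrics, iptr restores iptables minus the SPDK_NVMF-tagged rules, the veth/bridge topology and namespace are deleted, and clean_kernel_target removes the configfs port, namespace and subsystem in reverse order before nvmet is unloaded.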
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:39.666 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:39.923 19:28:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:39.923 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:40.181 19:28:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:40.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.746 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:41.005 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:41.005 00:26:41.005 real 0m3.362s 00:26:41.005 user 0m1.198s 00:26:41.005 sys 0m1.568s 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.005 ************************************ 00:26:41.005 END TEST nvmf_identify_kernel_target 00:26:41.005 ************************************ 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.005 ************************************ 00:26:41.005 START TEST nvmf_auth_host 00:26:41.005 ************************************ 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:41.005 * Looking for test storage... 
00:26:41.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:41.005 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.264 --rc genhtml_branch_coverage=1 00:26:41.264 --rc genhtml_function_coverage=1 00:26:41.264 --rc genhtml_legend=1 00:26:41.264 --rc geninfo_all_blocks=1 00:26:41.264 --rc geninfo_unexecuted_blocks=1 00:26:41.264 00:26:41.264 ' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.264 --rc genhtml_branch_coverage=1 00:26:41.264 --rc genhtml_function_coverage=1 00:26:41.264 --rc genhtml_legend=1 00:26:41.264 --rc geninfo_all_blocks=1 00:26:41.264 --rc geninfo_unexecuted_blocks=1 00:26:41.264 00:26:41.264 ' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.264 --rc genhtml_branch_coverage=1 00:26:41.264 --rc genhtml_function_coverage=1 00:26:41.264 --rc genhtml_legend=1 00:26:41.264 --rc geninfo_all_blocks=1 00:26:41.264 --rc geninfo_unexecuted_blocks=1 00:26:41.264 00:26:41.264 ' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:41.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.264 --rc genhtml_branch_coverage=1 00:26:41.264 --rc genhtml_function_coverage=1 00:26:41.264 --rc genhtml_legend=1 00:26:41.264 --rc geninfo_all_blocks=1 00:26:41.264 --rc geninfo_unexecuted_blocks=1 00:26:41.264 00:26:41.264 ' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.264 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:41.265 Cannot find device "nvmf_init_br" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:41.265 Cannot find device "nvmf_init_br2" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:41.265 Cannot find device "nvmf_tgt_br" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.265 Cannot find device "nvmf_tgt_br2" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:41.265 Cannot find device "nvmf_init_br" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:41.265 Cannot find device "nvmf_init_br2" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:41.265 Cannot find device "nvmf_tgt_br" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:41.265 Cannot find device "nvmf_tgt_br2" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:41.265 Cannot find device "nvmf_br" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:41.265 Cannot find device "nvmf_init_if" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:41.265 Cannot find device "nvmf_init_if2" 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.265 19:28:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.265 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
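At this point nvmf_veth_init is wiring up the test network: the nvmf_tgt_ns_spdk namespace, veth pairs for two initiator-side and two target-side interfaces, the 10.0.0.0/24 addresses, and the nvmf_br bridge that the *_br peers are being attached to here and in the entries that follow. A minimal sketch of the same topology, condensed from this trace to a single initiator/target pair (not the nvmf/common.sh helper itself), looks roughly like:

    # Sketch only -- condensed from the nvmf_veth_init trace, illustrative and incomplete.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # reachability check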
00:26:41.523 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:41.524 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:41.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:26:41.782 00:26:41.782 --- 10.0.0.3 ping statistics --- 00:26:41.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.782 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:41.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:41.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:26:41.782 00:26:41.782 --- 10.0.0.4 ping statistics --- 00:26:41.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.782 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:41.782 00:26:41.782 --- 10.0.0.1 ping statistics --- 00:26:41.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.782 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:41.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:26:41.782 00:26:41.782 --- 10.0.0.2 ping statistics --- 00:26:41.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.782 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=78453 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 78453 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78453 ']' 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
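With the bridged namespaces verified by the pings, nvmfappstart launches the SPDK target inside the target namespace with nvme_auth debug logging and blocks until its RPC socket answers (waitforlisten, continued below). Reconstructed from this trace, and substituting a direct rpc.py poll for the test's rpc_cmd/waitforlisten wrappers, the startup amounts to roughly:

    # Rough equivalent of nvmfappstart as seen in this run (pid 78453 in the log).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to accept RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
        sleep 0.2
    done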
00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.782 19:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=37e0027c0444e14a97fd16767b1e6bda 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.5aX 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 37e0027c0444e14a97fd16767b1e6bda 0 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 37e0027c0444e14a97fd16767b1e6bda 0 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=37e0027c0444e14a97fd16767b1e6bda 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:42.041 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.5aX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.5aX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5aX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.337 19:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a8c81aae7c7c3d46ae6e4f59a7eb837302a45224e4ccb67a26b338d6efe73bcb 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xJs 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a8c81aae7c7c3d46ae6e4f59a7eb837302a45224e4ccb67a26b338d6efe73bcb 3 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a8c81aae7c7c3d46ae6e4f59a7eb837302a45224e4ccb67a26b338d6efe73bcb 3 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a8c81aae7c7c3d46ae6e4f59a7eb837302a45224e4ccb67a26b338d6efe73bcb 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xJs 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xJs 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xJs 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b3db0dc48d57c009480e8094d37e77da5e4cc85b31a10503 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.3OY 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b3db0dc48d57c009480e8094d37e77da5e4cc85b31a10503 0 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b3db0dc48d57c009480e8094d37e77da5e4cc85b31a10503 0 
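host/auth.sh pre-builds five key/ctrlr-key pairs with gen_dhchap_key: random hex read from /dev/urandom via xxd, wrapped into a DHHC-1 secret by an inline python helper (its body is not visible in this trace), and stored in a mode-0600 temp file. The per-key flow, as it appears in the entries above and below, with the redirect into the temp file assumed rather than shown in the log:

    # Shape of one gen_dhchap_key call ("null" digest, 32-char key) from the trace.
    key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex chars
    file=$(mktemp -t spdk.key-null.XXX)
    format_dhchap_key "$key" 0 > "$file"          # emits "DHHC-1:00:...:" (redirect assumed)
    chmod 0600 "$file"
    keys[0]=$file                                 # ckeys[] entries are generated the same way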
00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b3db0dc48d57c009480e8094d37e77da5e4cc85b31a10503 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.3OY 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.3OY 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3OY 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c5aa2897b5cd44de929ae9eec6dc1703d5f34735181c6d7b 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.smi 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c5aa2897b5cd44de929ae9eec6dc1703d5f34735181c6d7b 2 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c5aa2897b5cd44de929ae9eec6dc1703d5f34735181c6d7b 2 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c5aa2897b5cd44de929ae9eec6dc1703d5f34735181c6d7b 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.smi 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.smi 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.smi 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.337 19:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=22ac868f94e1de4f1a8451c1971fa42e 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Cfg 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 22ac868f94e1de4f1a8451c1971fa42e 1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 22ac868f94e1de4f1a8451c1971fa42e 1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=22ac868f94e1de4f1a8451c1971fa42e 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.337 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Cfg 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Cfg 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Cfg 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=40ea8b18d4ff7a95478d3df247154a94 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.4ub 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 40ea8b18d4ff7a95478d3df247154a94 1 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 40ea8b18d4ff7a95478d3df247154a94 1 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=40ea8b18d4ff7a95478d3df247154a94 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:42.338 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.4ub 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.4ub 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4ub 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3dfb14e4a792f01102ff2396bfe37cc1f00f4efdc70af2f2 00:26:42.596 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Ba0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3dfb14e4a792f01102ff2396bfe37cc1f00f4efdc70af2f2 2 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3dfb14e4a792f01102ff2396bfe37cc1f00f4efdc70af2f2 2 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3dfb14e4a792f01102ff2396bfe37cc1f00f4efdc70af2f2 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Ba0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Ba0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ba0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:42.597 19:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=968153e30a0f5a7591ebae3c7159c4c6 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.OYr 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 968153e30a0f5a7591ebae3c7159c4c6 0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 968153e30a0f5a7591ebae3c7159c4c6 0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=968153e30a0f5a7591ebae3c7159c4c6 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.OYr 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.OYr 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OYr 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a8f9848dca79a8978acc5b0614bb9c8260a89dacee20c44168f97a359cdaac5c 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.j8V 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a8f9848dca79a8978acc5b0614bb9c8260a89dacee20c44168f97a359cdaac5c 3 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a8f9848dca79a8978acc5b0614bb9c8260a89dacee20c44168f97a359cdaac5c 3 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a8f9848dca79a8978acc5b0614bb9c8260a89dacee20c44168f97a359cdaac5c 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:42.597 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.j8V 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.j8V 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.j8V 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78453 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78453 ']' 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:42.855 19:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.113 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.113 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:43.113 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5aX 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xJs ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xJs 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3OY 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.smi ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.smi 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Cfg 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4ub ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4ub 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ba0 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OYr ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OYr 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.j8V 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.114 19:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:43.114 19:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:43.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.630 Waiting for block devices as requested 00:26:43.630 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:43.630 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.195 No valid GPT data, bailing 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:44.195 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:44.454 No valid GPT data, bailing 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:44.454 No valid GPT data, bailing 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:44.454 No valid GPT data, bailing 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -a 10.0.0.1 -t tcp -s 4420 00:26:44.454 00:26:44.454 Discovery Log Number of Records 2, Generation counter 2 00:26:44.454 =====Discovery Log Entry 0====== 00:26:44.454 trtype: tcp 00:26:44.454 adrfam: ipv4 00:26:44.454 subtype: current discovery subsystem 00:26:44.454 treq: not specified, sq flow control disable supported 00:26:44.454 portid: 1 00:26:44.454 trsvcid: 4420 00:26:44.454 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:44.454 traddr: 10.0.0.1 00:26:44.454 eflags: none 00:26:44.454 sectype: none 00:26:44.454 =====Discovery Log Entry 1====== 00:26:44.454 trtype: tcp 00:26:44.454 adrfam: ipv4 00:26:44.454 subtype: nvme subsystem 00:26:44.454 treq: not specified, sq flow control disable supported 00:26:44.454 portid: 1 00:26:44.454 trsvcid: 4420 00:26:44.454 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:44.454 traddr: 10.0.0.1 00:26:44.454 eflags: none 00:26:44.454 sectype: none 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:44.454 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.712 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.713 nvme0n1 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.713 19:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 nvme0n1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 
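
Each nvmet_auth_set_key invocation above only exposes its echo payloads in the xtrace; the redirection targets are not shown. A minimal standalone sketch of the target-side configuration it performs, with the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key under the allowed-host entry) assumed from the Linux nvmet configfs layout rather than taken from this log:

    hostnqn=nqn.2024-02.io.spdk:host0
    host=/sys/kernel/config/nvmet/hosts/$hostnqn
    # digest and DH group to negotiate for DH-HMAC-CHAP
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # assumed attribute name
    echo ffdhe2048      > "$host/dhchap_dhgroup"     # assumed attribute name
    # host key: host authenticates itself to the controller
    echo 'DHHC-1:00:...' > "$host/dhchap_key"        # assumed attribute name
    # controller key (optional): enables bidirectional authentication
    echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"   # assumed attribute name
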
19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.972 19:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.972 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.231 nvme0n1 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:45.231 19:28:54 
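
On the initiator side, each connect_authenticate pass above boils down to two RPCs; rpc_cmd is the autotest wrapper around scripts/rpc.py, so a direct equivalent of the calls in this log is roughly the following (key1/ckey1 are the names of keys presumably registered with the SPDK application earlier in the test, outside this excerpt, not the DHHC-1 strings themselves):

    # restrict the initiator to the digest / DH group pair under test
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key requests bidirectional auth
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
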
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.231 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.489 nvme0n1 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.489 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.490 19:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.490 nvme0n1 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.490 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.748 
19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
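
The DHHC-1 strings the loop cycles through follow the DH-HMAC-CHAP secret representation "DHHC-1:<t>:<base64 secret + CRC-32>:", where <t> records the transformation already applied to the secret (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of generating such secrets with nvme-cli, where the option names are assumed from nvme-cli's gen-dhchap-key subcommand and are not taken from this log:

    # 32-byte secret, no transformation (produces a DHHC-1:00:... string)
    nvme gen-dhchap-key --key-length=32 --hmac=0
    # 64-byte secret transformed with SHA-512 (produces DHHC-1:03:...)
    nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
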
00:26:45.748 nvme0n1 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.748 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.749 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:45.749 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:45.749 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.749 19:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.007 19:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.007 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.266 nvme0n1 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.266 19:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:46.266 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.267 19:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.267 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.526 nvme0n1 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.526 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.784 nvme0n1 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:46.784 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.785 nvme0n1 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.785 19:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.785 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 nvme0n1 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.043 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.044 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.978 19:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.978 nvme0n1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.978 19:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.978 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.236 nvme0n1 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.236 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.494 nvme0n1 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.494 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.752 nvme0n1 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.752 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.753 19:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.753 19:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.753 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.011 nvme0n1 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.011 19:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.913 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.914 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.480 nvme0n1 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:51.480 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.481 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.739 nvme0n1 00:26:51.739 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.739 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.739 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.739 19:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.739 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.739 19:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.997 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.998 19:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.998 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.256 nvme0n1 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.256 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.257 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.257 
19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.823 nvme0n1 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.823 19:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.081 nvme0n1 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.081 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.339 19:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.339 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.340 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.906 nvme0n1 00:26:53.906 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.906 19:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.906 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.473 nvme0n1 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.473 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:54.731 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.732 
19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.732 19:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.298 nvme0n1 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.298 19:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.233 nvme0n1 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.233 19:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.233 19:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.233 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.800 nvme0n1 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.800 19:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.059 nvme0n1 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.059 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.060 nvme0n1 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.060 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:57.318 
19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.318 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.319 nvme0n1 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.319 
19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.319 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 nvme0n1 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.578 nvme0n1 00:26:57.578 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 19:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 nvme0n1 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.837 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 
19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.104 19:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 nvme0n1 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:58.104 19:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.104 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.105 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.363 nvme0n1 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.363 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.364 19:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.364 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.622 nvme0n1 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.622 
19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.622 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
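Each pass of the trace above repeats the same host-side cycle that host/auth.sh drives through its rpc_cmd helper: constrain the initiator to one digest/dhgroup pair, attach to the target with a given DH-HMAC-CHAP key (and optionally the controller key for bidirectional auth), confirm the controller came up, then detach before the next key is tried. The following is a minimal standalone sketch of that cycle, not taken from the test itself: it calls scripts/rpc.py directly instead of rpc_cmd, and it assumes an SPDK target already listening on 10.0.0.1:4420 with the matching keys provisioned on the target side and the key names (key1/ckey1) already loaded into the host keyring earlier in the run.

#!/usr/bin/env bash
# Illustrative sketch of one attach/verify/detach pass from the trace above.
# Assumptions (not shown in this log excerpt): an SPDK build tree providing
# scripts/rpc.py, a target at 10.0.0.1:4420 configured for DH-HMAC-CHAP, and
# keyring entries named key1/ckey1 registered beforehand.
set -euo pipefail

RPC=./scripts/rpc.py      # assumed path to the SPDK RPC client
DIGEST=sha384
DHGROUP=ffdhe3072
KEYID=1

# Restrict negotiation to the digest/dhgroup combination under test
"$RPC" bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Attach using host key N; --dhchap-ctrlr-key enables bidirectional authentication
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${KEYID}" --dhchap-ctrlr-key "ckey${KEYID}"

# The attach only succeeds if authentication passed; verify the controller name,
# then detach so the next digest/dhgroup/keyid combination starts clean.
name=$("$RPC" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]
"$RPC" bdev_nvme_detach_controller nvme0

host/auth.sh loops this cycle over every keyid for each dhgroup (ffdhe3072, ffdhe4096, ffdhe6144 in this portion of the log) and digest, which is why the same set_options/attach/get_controllers/detach pattern recurs throughout the trace below.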
00:26:58.881 nvme0n1 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.881 19:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.881 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.882 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.882 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.882 19:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 nvme0n1 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.140 19:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:26:59.140 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.141 19:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.141 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.400 nvme0n1 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.400 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.659 nvme0n1 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.659 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.660 19:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.919 nvme0n1 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.919 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.177 nvme0n1 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.177 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.435 19:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.435 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.692 nvme0n1 00:27:00.692 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.692 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.692 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.693 19:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.693 19:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.260 nvme0n1 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.260 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.518 nvme0n1 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.518 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.775 19:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 nvme0n1 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.033 19:29:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.033 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.600 nvme0n1 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
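The xtrace around this point is host/auth.sh sweeping every digest / DH-group / key-ID combination against the same target. As a reading aid, a condensed sketch of that loop, reconstructed from the "for digest" / "for dhgroup" / "for keyid" trace lines; the exact array contents are an assumption, this excerpt only exercises sha384 and sha512 with ffdhe2048, ffdhe6144 and ffdhe8192 and key IDs 0 through 4.

    # loop skeleton reconstructed from the trace; array contents not shown in this excerpt
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # provision the kernel target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach from the SPDK host side
        done
      done
    done
    # copied from the trace at host/auth.sh@58: the controller key is optional, and the
    # ${var:+...} expansion leaves the array empty when no ckey exists for this key ID
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})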
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.600 19:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 nvme0n1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.166 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.738 nvme0n1 00:27:03.738 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.738 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.738 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.738 19:29:12 
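Each connect_authenticate round drives the SPDK host with the two RPCs visible in the trace just above: bdev_nvme_set_options pins the digest and DH group to negotiate, then bdev_nvme_attach_controller performs the authenticated connect. Condensed from the trace (key1/ckey1 are names of keys registered earlier in the script, outside this excerpt):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # each round then verifies the controller actually came up and tears it down again,
    # as the bdev_nvme_get_controllers / jq / detach_controller trace lines show
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0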
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.738 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.738 19:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.996 19:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.996 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.562 nvme0n1 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.562 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.563 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.563 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.563 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.563 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.563 19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.563 
19:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 nvme0n1 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
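nvmet_auth_set_key is the target-side half of each round: it pushes the digest, DH group and DHHC-1 secrets for the host NQN into the kernel nvmet target. xtrace shows only the echo commands, not where their output is redirected; assuming the standard nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key, an assumption since the paths never appear in this excerpt), the helper is roughly:

    nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local hostnqn=nqn.2024-02.io.spdk:host0          # host NQN taken from the attach RPCs above
      local host=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed configfs location
      echo "hmac($digest)"   > "$host/dhchap_hash"
      echo "$dhgroup"        > "$host/dhchap_dhgroup"
      echo "${keys[keyid]}"  > "$host/dhchap_key"
      # the bidirectional (controller) secret is only written when one exists for this key ID
      [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }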
common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.496 19:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 nvme0n1 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:06.063 19:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.063 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.064 19:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.064 nvme0n1 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.064 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:06.322 19:29:15 
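The DHHC-1 strings being echoed above follow the NVMe in-band authentication secret format, "DHHC-1:<t>:<base64 payload>:", where <t> selects the hash associated with the secret (00 means no transformation, 01/02/03 correspond to SHA-256/-384/-512) and the payload is the raw secret followed by a CRC-32; that reading is background from the spec and nvme-cli conventions, not something this log states. A quick structural check on one of the keys from the trace:

    key='DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p:'
    b64=${key#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
    b64=${b64%:}            # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 36 bytes: 32-byte secret + 4-byte CRC-32 (per the spec)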
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.322 nvme0n1 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.322 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.323 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.633 nvme0n1 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.633 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.634 nvme0n1 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
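get_main_ns_ip from nvmf/common.sh appears in every round above; the trace shows it mapping the active transport to the environment variable that holds the initiator IP and resolving it through indirection. A reconstruction from those trace lines (the transport variable name, TEST_TRANSPORT, is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
      )
      # variable name assumed; the trace only shows the expanded transport value
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # here: tcp -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion; resolves to 10.0.0.1
      echo "${!ip}"
    }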
common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.634 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:06.892 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.893 nvme0n1 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.893 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.151 nvme0n1 00:27:07.151 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.151 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.152 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 nvme0n1 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:07.411 
19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.411 nvme0n1 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.411 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.669 
19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.669 nvme0n1 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.669 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.928 nvme0n1 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.928 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.929 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.929 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.189 nvme0n1 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.189 
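Each keyid block in the trace is the same connect_authenticate round-trip (host/auth.sh@55-65): restrict the host to one digest/dhgroup pair, attach with the matching DH-HMAC-CHAP keys, confirm the controller appeared, and detach again. A hedged sketch of that sequence, using only the RPCs visible above (rpc_cmd is the test framework's RPC wrapper; key0..key4 / ckey0..ckey4 are key names already registered on the host, and the ckeys array holding the controller keys is assumed to be set up earlier in auth.sh):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Only allow the digest/dhgroup combination under test on the host side.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Pass the controller key only for slots that have one (keyid 4 in this
    # run has an empty ckey, so bidirectional auth is skipped there).
    local ckey=()
    [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only succeeds if authentication completed, so nvme0 must now
    # be listed by the host.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
}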
19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.189 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.446 19:29:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.446 nvme0n1 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.446 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:08.704 19:29:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 nvme0n1 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.704 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.962 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.962 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.962 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.962 19:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.962 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.963 19:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.963 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 nvme0n1 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.221 
19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.221 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
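What drives this whole stretch of the log is a two-level loop (host/auth.sh@101-104): for each FFDHE group, every key slot is first programmed into the kernel nvmet target and then exercised from the SPDK host. An outline under the same assumptions as the sketches above (the keys/ckeys arrays and both helper functions are defined earlier in auth.sh; the dhgroup list shown is just what appears in this run):

digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this trace
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Target side: install key/ckey for this slot under hmac(sha512) + dhgroup.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Host side: attach, verify, detach (see connect_authenticate above).
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done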
00:27:09.479 nvme0n1 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.479 19:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:09.479 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.480 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.737 nvme0n1 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.737 19:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.995 19:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.995 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.996 19:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.996 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.254 nvme0n1 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.254 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.512 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.770 nvme0n1 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.770 19:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.336 nvme0n1 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.336 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.595 nvme0n1 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzdlMDAyN2MwNDQ0ZTE0YTk3ZmQxNjc2N2IxZTZiZGG1aU0p: 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThjODFhYWU3YzdjM2Q0NmFlNmU0ZjU5YTdlYjgzNzMwMmE0NTIyNGU0Y2NiNjdhMjZiMzM4ZDZlZmU3M2JjYgjViEA=: 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.595 19:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.595 19:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.528 nvme0n1 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.528 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.529 19:29:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.529 19:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.094 nvme0n1 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.094 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.095 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.095 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.029 nvme0n1 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RmYjE0ZTRhNzkyZjAxMTAyZmYyMzk2YmZlMzdjYzFmMDBmNGVmZGM3MGFmMmYyHHtcxw==: 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: ]] 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTY4MTUzZTMwYTBmNWE3NTkxZWJhZTNjNzE1OWM0Yzbsx1x7: 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.029 19:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.029 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.596 nvme0n1 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThmOTg0OGRjYTc5YTg5NzhhY2M1YjA2MTRiYjljODI2MGE4OWRhY2VlMjBjNDQxNjhmOTdhMzU5Y2RhYWM1Y+pFVZA=: 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.596 19:29:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.596 19:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.161 nvme0n1 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.161 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.162 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.420 request: 00:27:15.420 { 00:27:15.420 "name": "nvme0", 00:27:15.420 "trtype": "tcp", 00:27:15.420 "traddr": "10.0.0.1", 00:27:15.420 "adrfam": "ipv4", 00:27:15.420 "trsvcid": "4420", 00:27:15.420 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:15.420 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:15.421 "prchk_reftag": false, 00:27:15.421 "prchk_guard": false, 00:27:15.421 "hdgst": false, 00:27:15.421 "ddgst": false, 00:27:15.421 "allow_unrecognized_csi": false, 00:27:15.421 "method": "bdev_nvme_attach_controller", 00:27:15.421 "req_id": 1 00:27:15.421 } 00:27:15.421 Got JSON-RPC error response 00:27:15.421 response: 00:27:15.421 { 00:27:15.421 "code": -5, 00:27:15.421 "message": "Input/output error" 00:27:15.421 } 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 request: 00:27:15.421 { 00:27:15.421 "name": "nvme0", 00:27:15.421 "trtype": "tcp", 00:27:15.421 "traddr": "10.0.0.1", 00:27:15.421 "adrfam": "ipv4", 00:27:15.421 "trsvcid": "4420", 00:27:15.421 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:15.421 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:15.421 "prchk_reftag": false, 00:27:15.421 "prchk_guard": false, 00:27:15.421 "hdgst": false, 00:27:15.421 "ddgst": false, 00:27:15.421 "dhchap_key": "key2", 00:27:15.421 "allow_unrecognized_csi": false, 00:27:15.421 "method": "bdev_nvme_attach_controller", 00:27:15.421 "req_id": 1 00:27:15.421 } 00:27:15.421 Got JSON-RPC error response 00:27:15.421 response: 00:27:15.421 { 00:27:15.421 "code": -5, 00:27:15.421 "message": "Input/output error" 00:27:15.421 } 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:15.421 19:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.421 request: 00:27:15.421 { 00:27:15.421 "name": "nvme0", 00:27:15.421 "trtype": "tcp", 00:27:15.421 "traddr": "10.0.0.1", 00:27:15.421 "adrfam": "ipv4", 00:27:15.421 "trsvcid": "4420", 
00:27:15.421 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:15.421 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:15.421 "prchk_reftag": false, 00:27:15.421 "prchk_guard": false, 00:27:15.421 "hdgst": false, 00:27:15.421 "ddgst": false, 00:27:15.421 "dhchap_key": "key1", 00:27:15.421 "dhchap_ctrlr_key": "ckey2", 00:27:15.421 "allow_unrecognized_csi": false, 00:27:15.421 "method": "bdev_nvme_attach_controller", 00:27:15.421 "req_id": 1 00:27:15.421 } 00:27:15.421 Got JSON-RPC error response 00:27:15.421 response: 00:27:15.421 { 00:27:15.421 "code": -5, 00:27:15.421 "message": "Input/output error" 00:27:15.421 } 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.421 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.679 nvme0n1 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.680 request: 00:27:15.680 { 00:27:15.680 "name": "nvme0", 00:27:15.680 "dhchap_key": "key1", 00:27:15.680 "dhchap_ctrlr_key": "ckey2", 00:27:15.680 "method": "bdev_nvme_set_keys", 00:27:15.680 "req_id": 1 00:27:15.680 } 00:27:15.680 Got JSON-RPC error response 00:27:15.680 response: 00:27:15.680 
{ 00:27:15.680 "code": -13, 00:27:15.680 "message": "Permission denied" 00:27:15.680 } 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:15.680 19:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjNkYjBkYzQ4ZDU3YzAwOTQ4MGU4MDk0ZDM3ZTc3ZGE1ZTRjYzg1YjMxYTEwNTAzOuU+og==: 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: ]] 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzVhYTI4OTdiNWNkNDRkZTkyOWFlOWVlYzZkYzE3MDNkNWYzNDczNTE4MWM2ZDdi3aeZUg==: 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.057 19:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.057 nvme0n1 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjJhYzg2OGY5NGUxZGU0ZjFhODQ1MWMxOTcxZmE0MmUlpAuc: 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: ]] 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDBlYThiMThkNGZmN2E5NTQ3OGQzZGYyNDcxNTRhOTTGGN/q: 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.057 request: 00:27:17.057 { 00:27:17.057 "name": "nvme0", 00:27:17.057 "dhchap_key": "key2", 00:27:17.057 "dhchap_ctrlr_key": "ckey1", 00:27:17.057 "method": "bdev_nvme_set_keys", 00:27:17.057 "req_id": 1 00:27:17.057 } 00:27:17.057 Got JSON-RPC error response 00:27:17.057 response: 00:27:17.057 { 00:27:17.057 "code": -13, 00:27:17.057 "message": "Permission denied" 00:27:17.057 } 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.057 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.058 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:17.058 19:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.002 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.002 rmmod nvme_tcp 00:27:18.002 rmmod nvme_fabrics 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 78453 ']' 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 78453 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 78453 ']' 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 78453 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78453 00:27:18.259 killing process with pid 78453 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.259 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.260 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78453' 00:27:18.260 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 78453 00:27:18.260 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 78453 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:18.517 19:29:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.517 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:18.775 19:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:19.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:19.340 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
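The auth-host cleanup above dismantles the kernel nvmet target through configfs in the reverse order of its creation: the allowed-host link and host entry are removed, the namespace is disabled and deleted, the subsystem is unlinked from port 1, and the port and subsystem directories are removed before nvmet_tcp/nvmet are unloaded. A condensed, hand-runnable sketch of that order follows; all paths come from the trace, but the bare "echo 0" does not show its redirect target, so pointing it at the namespace's enable attribute is an assumption.
# Kernel nvmet teardown, mirroring clean_kernel_target in the trace (run as root)
nqn=nqn.2024-02.io.spdk:cnode0
host=nqn.2024-02.io.spdk:host0
cfg=/sys/kernel/config/nvmet
rm "$cfg/subsystems/$nqn/allowed_hosts/$host"        # drop the host<->subsystem link
rmdir "$cfg/hosts/$host"                             # remove the host entry
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare "echo 0"
rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink the subsystem from port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                          # unload the kernel target modules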
00:27:19.597 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:19.597 19:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5aX /tmp/spdk.key-null.3OY /tmp/spdk.key-sha256.Cfg /tmp/spdk.key-sha384.Ba0 /tmp/spdk.key-sha512.j8V /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:27:19.597 19:29:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:19.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:19.856 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:19.856 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:20.113 00:27:20.113 real 0m39.006s 00:27:20.113 user 0m34.815s 00:27:20.113 sys 0m4.074s 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.113 ************************************ 00:27:20.113 END TEST nvmf_auth_host 00:27:20.113 ************************************ 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.113 ************************************ 00:27:20.113 START TEST nvmf_digest 00:27:20.113 ************************************ 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:20.113 * Looking for test storage... 
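The nvmf_auth_host run that finishes above exercises DH-HMAC-CHAP on both the happy path and deliberate mismatches: attaching with the wrong key fails the handshake and the RPC returns -5 (Input/output error), while re-keying a live controller with a pair the target rejects returns -13 (Permission denied). A minimal sketch of the same flow driven by hand through SPDK's scripts/rpc.py follows; the address, NQNs and flag spellings are taken from the trace, and it assumes the target from earlier in the run is still listening on 10.0.0.1:4420 with keyring entries key1/key2 and ckey1/ckey2 already registered.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Limit the initiator to the digest/DH-group pair under test (mirrors host/auth.sh@111).
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Wrong key: the DH-HMAC-CHAP handshake fails and the attach returns -5 (Input/output error).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 || echo "attach with mismatched key rejected, as in the trace"
# Matching host and controller keys: the attach succeeds and nvme0 shows up in get_controllers.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers
# Re-keying with a pair the target does not accept is refused with -13 (Permission denied).
$rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 \
    || echo "set_keys with mismatched pair rejected, as in the trace"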
00:27:20.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:20.113 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.372 --rc genhtml_branch_coverage=1 00:27:20.372 --rc genhtml_function_coverage=1 00:27:20.372 --rc genhtml_legend=1 00:27:20.372 --rc geninfo_all_blocks=1 00:27:20.372 --rc geninfo_unexecuted_blocks=1 00:27:20.372 00:27:20.372 ' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.372 --rc genhtml_branch_coverage=1 00:27:20.372 --rc genhtml_function_coverage=1 00:27:20.372 --rc genhtml_legend=1 00:27:20.372 --rc geninfo_all_blocks=1 00:27:20.372 --rc geninfo_unexecuted_blocks=1 00:27:20.372 00:27:20.372 ' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.372 --rc genhtml_branch_coverage=1 00:27:20.372 --rc genhtml_function_coverage=1 00:27:20.372 --rc genhtml_legend=1 00:27:20.372 --rc geninfo_all_blocks=1 00:27:20.372 --rc geninfo_unexecuted_blocks=1 00:27:20.372 00:27:20.372 ' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:20.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.372 --rc genhtml_branch_coverage=1 00:27:20.372 --rc genhtml_function_coverage=1 00:27:20.372 --rc genhtml_legend=1 00:27:20.372 --rc geninfo_all_blocks=1 00:27:20.372 --rc geninfo_unexecuted_blocks=1 00:27:20.372 00:27:20.372 ' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.372 19:29:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.372 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:20.373 Cannot find device "nvmf_init_br" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:20.373 Cannot find device "nvmf_init_br2" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:20.373 Cannot find device "nvmf_tgt_br" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:27:20.373 Cannot find device "nvmf_tgt_br2" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:20.373 Cannot find device "nvmf_init_br" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:20.373 Cannot find device "nvmf_init_br2" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:20.373 Cannot find device "nvmf_tgt_br" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:20.373 Cannot find device "nvmf_tgt_br2" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:20.373 Cannot find device "nvmf_br" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:20.373 Cannot find device "nvmf_init_if" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:20.373 Cannot find device "nvmf_init_if2" 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.373 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:20.632 19:29:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:20.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:20.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:27:20.632 00:27:20.632 --- 10.0.0.3 ping statistics --- 00:27:20.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.632 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:20.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:20.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:27:20.632 00:27:20.632 --- 10.0.0.4 ping statistics --- 00:27:20.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.632 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:20.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:20.632 00:27:20.632 --- 10.0.0.1 ping statistics --- 00:27:20.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.632 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:20.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:27:20.632 00:27:20.632 --- 10.0.0.2 ping statistics --- 00:27:20.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.632 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:20.632 ************************************ 00:27:20.632 START TEST nvmf_digest_clean 00:27:20.632 ************************************ 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
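The block above is nvmf_veth_init from nvmf/common.sh: the earlier "Cannot find device" / "Cannot open network namespace" messages are the expected no-op teardown of a previous run, after which the script builds a veth/bridge topology between the initiator (host namespace) and the target namespace and verifies it with pings in both directions. A condensed sketch of the first initiator/target pair, using only commands that appear in the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way, and the various 'ip link set ... up' calls are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.3                                                      # initiator -> target netns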
00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=80117 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 80117 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80117 ']' 00:27:20.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.632 19:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.891 [2024-10-17 19:29:29.903355] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:20.891 [2024-10-17 19:29:29.903473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.891 [2024-10-17 19:29:30.046083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.891 [2024-10-17 19:29:30.129750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.891 [2024-10-17 19:29:30.129822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.891 [2024-10-17 19:29:30.129836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.891 [2024-10-17 19:29:30.129847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.891 [2024-10-17 19:29:30.129856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
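At this point nvmfappstart launches the target application inside the namespace that was just wired up: NVMF_APP was prefixed with NVMF_TARGET_NS_CMD, so pid 80117 is an nvmf_tgt running under 'ip netns exec nvmf_tgt_ns_spdk', started with --wait-for-rpc so it pauses before subsystem init until told to proceed over RPC. A minimal sketch of the launch/wait pattern, assuming the paths shown in this run:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten then polls until /var/tmp/spdk.sock accepts RPC connections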
00:27:20.891 [2024-10-17 19:29:30.130397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.824 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.825 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:21.825 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:21.825 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:21.825 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.825 19:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.825 [2024-10-17 19:29:31.047622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:22.082 null0 00:27:22.082 [2024-10-17 19:29:31.113404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.082 [2024-10-17 19:29:31.137512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:22.082 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80149 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80149 /var/tmp/bperf.sock 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80149 ']' 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:22.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.083 19:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.083 [2024-10-17 19:29:31.202726] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:22.083 [2024-10-17 19:29:31.203028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80149 ] 00:27:22.341 [2024-10-17 19:29:31.342235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.341 [2024-10-17 19:29:31.439849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.270 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:23.270 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:23.270 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:23.270 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:23.270 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:23.528 [2024-10-17 19:29:32.599968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:23.528 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.528 19:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:24.093 nvme0n1 00:27:24.093 19:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:24.093 19:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:24.093 Running I/O for 2 seconds... 
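This is the host side of the clean-digest baseline: bdevperf runs on core 1 (-m 2) with its own RPC socket (-r /var/tmp/bperf.sock) and --wait-for-rpc, so the test first completes framework init, then attaches an NVMe-oF controller with data digest enabled (--ddgst) against the listener at 10.0.0.3:4420, and finally drives the 2-second workload through bdevperf.py. The three RPCs, as traced above (repo paths shortened):

  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests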
00:27:25.984 14605.00 IOPS, 57.05 MiB/s [2024-10-17T19:29:35.242Z] 14732.00 IOPS, 57.55 MiB/s 00:27:25.984 Latency(us) 00:27:25.984 [2024-10-17T19:29:35.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.984 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:25.984 nvme0n1 : 2.01 14756.14 57.64 0.00 0.00 8666.79 7983.48 19660.80 00:27:25.984 [2024-10-17T19:29:35.242Z] =================================================================================================================== 00:27:25.984 [2024-10-17T19:29:35.242Z] Total : 14756.14 57.64 0.00 0.00 8666.79 7983.48 19660.80 00:27:25.984 { 00:27:25.984 "results": [ 00:27:25.984 { 00:27:25.984 "job": "nvme0n1", 00:27:25.984 "core_mask": "0x2", 00:27:25.984 "workload": "randread", 00:27:25.984 "status": "finished", 00:27:25.984 "queue_depth": 128, 00:27:25.984 "io_size": 4096, 00:27:25.984 "runtime": 2.005403, 00:27:25.984 "iops": 14756.136297791516, 00:27:25.984 "mibps": 57.64115741324811, 00:27:25.984 "io_failed": 0, 00:27:25.984 "io_timeout": 0, 00:27:25.984 "avg_latency_us": 8666.785279559586, 00:27:25.984 "min_latency_us": 7983.476363636363, 00:27:25.984 "max_latency_us": 19660.8 00:27:25.984 } 00:27:25.984 ], 00:27:25.984 "core_count": 1 00:27:25.984 } 00:27:25.984 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:25.984 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:26.242 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:26.242 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:26.242 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:26.242 | select(.opcode=="crc32c") 00:27:26.242 | "\(.module_name) \(.executed)"' 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80149 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80149 ']' 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80149 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80149 00:27:26.499 killing process with pid 80149 00:27:26.499 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.499 00:27:26.499 Latency(us) 00:27:26.499 [2024-10-17T19:29:35.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.499 
[2024-10-17T19:29:35.757Z] =================================================================================================================== 00:27:26.499 [2024-10-17T19:29:35.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80149' 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80149 00:27:26.499 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80149 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80214 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80214 /var/tmp/bperf.sock 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80214 ']' 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.758 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:26.759 19:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.759 [2024-10-17 19:29:35.948836] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:26.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.759 Zero copy mechanism will not be used. 
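The second pass repeats the randread workload with large I/O at low queue depth: run_bperf randread 131072 16 maps to the bdevperf invocation below (from the trace). The zero-copy notice above is expected with the uring socket implementation in this run: 131072-byte buffers exceed the 65536-byte zero-copy threshold, so data is copied on send/receive instead.

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc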
00:27:26.759 [2024-10-17 19:29:35.949305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80214 ] 00:27:27.016 [2024-10-17 19:29:36.089435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.016 [2024-10-17 19:29:36.165068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.005 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.005 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:28.005 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:28.005 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:28.005 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:28.262 [2024-10-17 19:29:37.307871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:28.262 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.262 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.520 nvme0n1 00:27:28.520 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:28.520 19:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:28.778 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:28.778 Zero copy mechanism will not be used. 00:27:28.778 Running I/O for 2 seconds... 
00:27:30.645 6576.00 IOPS, 822.00 MiB/s [2024-10-17T19:29:39.903Z] 6632.00 IOPS, 829.00 MiB/s 00:27:30.645 Latency(us) 00:27:30.645 [2024-10-17T19:29:39.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.645 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:30.645 nvme0n1 : 2.00 6630.05 828.76 0.00 0.00 2410.03 2025.66 10545.34 00:27:30.645 [2024-10-17T19:29:39.903Z] =================================================================================================================== 00:27:30.645 [2024-10-17T19:29:39.903Z] Total : 6630.05 828.76 0.00 0.00 2410.03 2025.66 10545.34 00:27:30.645 { 00:27:30.645 "results": [ 00:27:30.645 { 00:27:30.645 "job": "nvme0n1", 00:27:30.645 "core_mask": "0x2", 00:27:30.645 "workload": "randread", 00:27:30.645 "status": "finished", 00:27:30.645 "queue_depth": 16, 00:27:30.645 "io_size": 131072, 00:27:30.645 "runtime": 2.003002, 00:27:30.645 "iops": 6630.048297505445, 00:27:30.645 "mibps": 828.7560371881806, 00:27:30.645 "io_failed": 0, 00:27:30.645 "io_timeout": 0, 00:27:30.645 "avg_latency_us": 2410.031106243154, 00:27:30.645 "min_latency_us": 2025.658181818182, 00:27:30.645 "max_latency_us": 10545.338181818182 00:27:30.645 } 00:27:30.645 ], 00:27:30.645 "core_count": 1 00:27:30.645 } 00:27:30.645 19:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:30.645 19:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:30.903 19:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:30.903 19:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:30.903 | select(.opcode=="crc32c") 00:27:30.903 | "\(.module_name) \(.executed)"' 00:27:30.903 19:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80214 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80214 ']' 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80214 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80214 00:27:31.160 killing process with pid 80214 00:27:31.160 Received shutdown signal, test time was about 2.000000 seconds 00:27:31.160 00:27:31.160 Latency(us) 00:27:31.160 [2024-10-17T19:29:40.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:31.160 [2024-10-17T19:29:40.418Z] =================================================================================================================== 00:27:31.160 [2024-10-17T19:29:40.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80214' 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80214 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80214 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.160 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80271 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80271 /var/tmp/bperf.sock 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80271 ']' 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.161 19:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.418 [2024-10-17 19:29:40.476770] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:27:31.418 [2024-10-17 19:29:40.477260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80271 ] 00:27:31.418 [2024-10-17 19:29:40.617935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.676 [2024-10-17 19:29:40.683422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.242 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.242 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:32.242 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:32.242 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:32.242 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.807 [2024-10-17 19:29:41.806514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:32.807 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.807 19:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.064 nvme0n1 00:27:33.064 19:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:33.064 19:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:33.064 Running I/O for 2 seconds... 
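The write-path runs mirror the read-path ones: only the workload flag changes, and the same framework_start_init / attach_controller / perform_tests sequence is replayed against a fresh bdevperf instance (pid 80271 for this run, pid 80337 for the 128 KiB, QD 16 case that follows). For reference, the invocation for this run as traced above:

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc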
00:27:35.370 14225.00 IOPS, 55.57 MiB/s [2024-10-17T19:29:44.628Z] 14605.50 IOPS, 57.05 MiB/s 00:27:35.370 Latency(us) 00:27:35.370 [2024-10-17T19:29:44.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.370 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:35.370 nvme0n1 : 2.01 14626.72 57.14 0.00 0.00 8741.02 4021.53 18111.77 00:27:35.370 [2024-10-17T19:29:44.628Z] =================================================================================================================== 00:27:35.370 [2024-10-17T19:29:44.628Z] Total : 14626.72 57.14 0.00 0.00 8741.02 4021.53 18111.77 00:27:35.370 { 00:27:35.370 "results": [ 00:27:35.370 { 00:27:35.370 "job": "nvme0n1", 00:27:35.370 "core_mask": "0x2", 00:27:35.370 "workload": "randwrite", 00:27:35.370 "status": "finished", 00:27:35.370 "queue_depth": 128, 00:27:35.370 "io_size": 4096, 00:27:35.370 "runtime": 2.005849, 00:27:35.370 "iops": 14626.724145237255, 00:27:35.370 "mibps": 57.135641192333026, 00:27:35.370 "io_failed": 0, 00:27:35.370 "io_timeout": 0, 00:27:35.370 "avg_latency_us": 8741.015717335596, 00:27:35.370 "min_latency_us": 4021.5272727272727, 00:27:35.370 "max_latency_us": 18111.767272727273 00:27:35.370 } 00:27:35.370 ], 00:27:35.370 "core_count": 1 00:27:35.370 } 00:27:35.370 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:35.370 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:35.370 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:35.370 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:35.370 | select(.opcode=="crc32c") 00:27:35.371 | "\(.module_name) \(.executed)"' 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80271 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80271 ']' 00:27:35.371 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80271 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80271 00:27:35.628 killing process with pid 80271 00:27:35.628 Received shutdown signal, test time was about 2.000000 seconds 00:27:35.628 00:27:35.628 Latency(us) 00:27:35.628 [2024-10-17T19:29:44.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:27:35.628 [2024-10-17T19:29:44.886Z] =================================================================================================================== 00:27:35.628 [2024-10-17T19:29:44.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80271' 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80271 00:27:35.628 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80271 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80337 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80337 /var/tmp/bperf.sock 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80337 ']' 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.886 19:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:35.886 [2024-10-17 19:29:44.987823] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:27:35.886 [2024-10-17 19:29:44.988201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80337 ] 00:27:35.886 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.886 Zero copy mechanism will not be used. 00:27:35.886 [2024-10-17 19:29:45.125166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.145 [2024-10-17 19:29:45.197912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.077 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.077 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:37.077 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:37.077 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:37.077 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:37.335 [2024-10-17 19:29:46.351070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:37.335 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.336 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.593 nvme0n1 00:27:37.593 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:37.593 19:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.857 Zero copy mechanism will not be used. 00:27:37.857 Running I/O for 2 seconds... 
00:27:39.745 5764.00 IOPS, 720.50 MiB/s [2024-10-17T19:29:49.003Z] 5783.00 IOPS, 722.88 MiB/s 00:27:39.745 Latency(us) 00:27:39.745 [2024-10-17T19:29:49.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.745 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:39.745 nvme0n1 : 2.00 5781.75 722.72 0.00 0.00 2761.55 2234.18 12928.47 00:27:39.745 [2024-10-17T19:29:49.003Z] =================================================================================================================== 00:27:39.745 [2024-10-17T19:29:49.003Z] Total : 5781.75 722.72 0.00 0.00 2761.55 2234.18 12928.47 00:27:39.745 { 00:27:39.745 "results": [ 00:27:39.745 { 00:27:39.745 "job": "nvme0n1", 00:27:39.745 "core_mask": "0x2", 00:27:39.745 "workload": "randwrite", 00:27:39.745 "status": "finished", 00:27:39.745 "queue_depth": 16, 00:27:39.745 "io_size": 131072, 00:27:39.745 "runtime": 2.003199, 00:27:39.745 "iops": 5781.752087535986, 00:27:39.745 "mibps": 722.7190109419982, 00:27:39.745 "io_failed": 0, 00:27:39.745 "io_timeout": 0, 00:27:39.745 "avg_latency_us": 2761.552508123891, 00:27:39.745 "min_latency_us": 2234.181818181818, 00:27:39.745 "max_latency_us": 12928.465454545454 00:27:39.745 } 00:27:39.745 ], 00:27:39.745 "core_count": 1 00:27:39.745 } 00:27:39.745 19:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:39.745 19:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:39.745 19:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:39.745 19:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:39.745 19:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:39.745 | select(.opcode=="crc32c") 00:27:39.745 | "\(.module_name) \(.executed)"' 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80337 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80337 ']' 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80337 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80337 00:27:40.310 killing process with pid 80337 00:27:40.310 Received shutdown signal, test time was about 2.000000 seconds 00:27:40.310 00:27:40.310 Latency(us) 00:27:40.310 [2024-10-17T19:29:49.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
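After each bperf run the test pulls accel statistics from the bdevperf instance and checks that crc32c (the digest calculation) was actually executed by the expected module: with scan_dsa=false the expected module is "software", so the assertion is module_name == software and executed > 0. The stats pipeline as traced above:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'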
00:27:40.310 [2024-10-17T19:29:49.568Z] =================================================================================================================== 00:27:40.310 [2024-10-17T19:29:49.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80337' 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80337 00:27:40.310 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80337 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80117 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80117 ']' 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80117 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80117 00:27:40.568 killing process with pid 80117 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80117' 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80117 00:27:40.568 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80117 00:27:40.827 00:27:40.827 real 0m20.044s 00:27:40.827 user 0m39.296s 00:27:40.827 sys 0m5.204s 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.827 ************************************ 00:27:40.827 END TEST nvmf_digest_clean 00:27:40.827 ************************************ 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.827 ************************************ 00:27:40.827 START TEST nvmf_digest_error 00:27:40.827 ************************************ 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:40.827 19:29:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=80426 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 80426 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80426 ']' 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:40.827 19:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.827 [2024-10-17 19:29:50.010601] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:40.827 [2024-10-17 19:29:50.011003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.085 [2024-10-17 19:29:50.145730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.085 [2024-10-17 19:29:50.226679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.085 [2024-10-17 19:29:50.227030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.085 [2024-10-17 19:29:50.227222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.085 [2024-10-17 19:29:50.227353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.085 [2024-10-17 19:29:50.227368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
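nvmf_digest_error starts a fresh target (pid 80426) the same way as the clean test, but before serving I/O it routes the crc32c opcode through the "error" accel module (accel_assign_opc, noted just below) so digest corruption can be injected by RPC during the run. Once corruption is enabled, the initiator-side reads fail data digest verification and complete with COMMAND TRANSIENT TRANSPORT ERROR, which is what the tail of this trace shows. Sketch of the two error-path RPCs as they appear in the test (the exact semantics of '-i 256' are not spelled out in the trace; it appears to bound how many operations are affected):

  rpc_cmd accel_assign_opc -o crc32c -m error                    # target: crc32c handled by the error module
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # start corrupting crc32c results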
00:27:41.085 [2024-10-17 19:29:50.227831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.085 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.085 [2024-10-17 19:29:50.340359] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.343 [2024-10-17 19:29:50.429449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:41.343 null0 00:27:41.343 [2024-10-17 19:29:50.494694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.343 [2024-10-17 19:29:50.518847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80449 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80449 /var/tmp/bperf.sock 00:27:41.343 19:29:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80449 ']' 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.343 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.343 [2024-10-17 19:29:50.576182] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:41.343 [2024-10-17 19:29:50.576499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80449 ] 00:27:41.600 [2024-10-17 19:29:50.711588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.600 [2024-10-17 19:29:50.792078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.887 [2024-10-17 19:29:50.867862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:41.887 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.887 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:41.887 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.887 19:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.145 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.710 nvme0n1 00:27:42.710 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:42.710 19:29:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.710 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.710 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.710 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:42.710 19:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.710 Running I/O for 2 seconds... 00:27:42.710 [2024-10-17 19:29:51.841386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.841718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.841740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.859411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.859489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.859506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.877807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.877885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.895371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.895433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.895450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.912739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.913059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.913080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.930467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.930519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13967 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.930534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.710 [2024-10-17 19:29:51.947821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.710 [2024-10-17 19:29:51.948109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.710 [2024-10-17 19:29:51.948143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.711 [2024-10-17 19:29:51.965589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.711 [2024-10-17 19:29:51.965645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.711 [2024-10-17 19:29:51.965660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:51.983027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:51.983328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:51.983348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.000863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.000903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.000916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.019370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.019416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.019431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.036726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.036764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.036777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.054110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.054164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.054181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.071750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.071807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.071821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.089182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.089228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.089243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.106516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.106559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.106572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.127854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.127981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.128014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.149069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.967 [2024-10-17 19:29:52.149168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.967 [2024-10-17 19:29:52.149189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.967 [2024-10-17 19:29:52.168928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.968 [2024-10-17 19:29:52.169007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.968 [2024-10-17 19:29:52.169027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.968 [2024-10-17 19:29:52.188907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.968 [2024-10-17 19:29:52.188984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.968 [2024-10-17 19:29:52.189003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.968 [2024-10-17 19:29:52.208756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:42.968 [2024-10-17 19:29:52.208830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.968 [2024-10-17 19:29:52.208848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.224 [2024-10-17 19:29:52.228482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.224 [2024-10-17 19:29:52.228561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.224 [2024-10-17 19:29:52.228586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.224 [2024-10-17 19:29:52.248221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.224 [2024-10-17 19:29:52.248290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.267863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.267915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.267932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.287371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.287411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.287427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.306924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.306965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.306981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.326360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.326403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.326419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.346007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.346057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.346072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.365440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.365479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.365495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.384989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.385030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.385045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.404654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.404700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.404716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.424438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.424510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.424528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.443909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.443980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.443994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.461179] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.461233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.461247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.225 [2024-10-17 19:29:52.478532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.225 [2024-10-17 19:29:52.478597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.225 [2024-10-17 19:29:52.478610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.496261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.496309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.496322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.513938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.513973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.513986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.531637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.531684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.531696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.549241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.549287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.566718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.566752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.566764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:43.481 [2024-10-17 19:29:52.584226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.584263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.584276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.601844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.601881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.601895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.619499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.619551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.637145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.637178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.637192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.654681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.654714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.654727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.672226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.672262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.672276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.689687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.689721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.689733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.707247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.707283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.707296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.481 [2024-10-17 19:29:52.724795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.481 [2024-10-17 19:29:52.724838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.481 [2024-10-17 19:29:52.724852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.742562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.742620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.742634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.760645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.760702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.760717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.778445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.778484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.778497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.796174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.796209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.796223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 13663.00 IOPS, 53.37 MiB/s [2024-10-17T19:29:52.996Z] [2024-10-17 19:29:52.813749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.813789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 
[2024-10-17 19:29:52.813803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.831318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.831354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.831367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.848827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.848865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.848879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.866334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.866369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.866383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.884180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.884219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.884232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.904389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.904424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.904436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.922505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.922541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.922554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.940529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.940565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:17333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.940578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.958201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.738 [2024-10-17 19:29:52.958236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.738 [2024-10-17 19:29:52.958249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.738 [2024-10-17 19:29:52.975954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.739 [2024-10-17 19:29:52.975985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.739 [2024-10-17 19:29:52.975997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-10-17 19:29:53.001641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.995 [2024-10-17 19:29:53.001679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.995 [2024-10-17 19:29:53.001693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-10-17 19:29:53.019490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.995 [2024-10-17 19:29:53.019535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.995 [2024-10-17 19:29:53.019550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-10-17 19:29:53.037299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.995 [2024-10-17 19:29:53.037336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.995 [2024-10-17 19:29:53.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.995 [2024-10-17 19:29:53.054759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.995 [2024-10-17 19:29:53.054795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.995 [2024-10-17 19:29:53.054808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.072300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.072336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.072350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.089842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.089877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.089890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.107614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.107650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.107664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.125499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.125540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.125555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.143486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.143530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.143544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.161367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.161404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.161417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.179262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.179299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.179312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.197033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.197075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.197088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.214879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.214917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.214931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.232757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.232800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.232814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.996 [2024-10-17 19:29:53.250859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:43.996 [2024-10-17 19:29:53.250924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.996 [2024-10-17 19:29:53.250944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.268971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.269011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.269025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.286902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.286938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.286950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.304460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.304499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.304513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.322055] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.322092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.322106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.339478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.339513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.339525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.356969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.357003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.357015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.374454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.374488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.374500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.391896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.391930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.391942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.409379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.409418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.409431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.426849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.426883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.426896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:44.253 [2024-10-17 19:29:53.444541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.444580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.444594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.462185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.462220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.462233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.479715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.479750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.253 [2024-10-17 19:29:53.497144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.253 [2024-10-17 19:29:53.497177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.253 [2024-10-17 19:29:53.497190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.514679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.514720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.514735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.532193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.532230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.532243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.549883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.549932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.549946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.567515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.567557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.567571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.585007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.585043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.585056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.602533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.602568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.602582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.619984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.620018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.620031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.637460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.637495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.637508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.654993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.655038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.655052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.672758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.672797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.672813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.690523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.690572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.690586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.708036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.708089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.708102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.725583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.511 [2024-10-17 19:29:53.725630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.511 [2024-10-17 19:29:53.725645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.511 [2024-10-17 19:29:53.743194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.512 [2024-10-17 19:29:53.743240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.512 [2024-10-17 19:29:53.743253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.512 [2024-10-17 19:29:53.760698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.512 [2024-10-17 19:29:53.760746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.512 [2024-10-17 19:29:53.760760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.770 [2024-10-17 19:29:53.778474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.770 [2024-10-17 19:29:53.778522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.770 [2024-10-17 19:29:53.778536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.770 [2024-10-17 19:29:53.796100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.770 [2024-10-17 19:29:53.796150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:44.770 [2024-10-17 19:29:53.796165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.770 13979.00 IOPS, 54.61 MiB/s [2024-10-17T19:29:54.028Z] [2024-10-17 19:29:53.813365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21012b0) 00:27:44.770 [2024-10-17 19:29:53.813405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.771 [2024-10-17 19:29:53.813418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.771 00:27:44.771 Latency(us) 00:27:44.771 [2024-10-17T19:29:54.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.771 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:44.771 nvme0n1 : 2.01 14000.23 54.69 0.00 0.00 9135.46 8340.95 34555.35 00:27:44.771 [2024-10-17T19:29:54.029Z] =================================================================================================================== 00:27:44.771 [2024-10-17T19:29:54.029Z] Total : 14000.23 54.69 0.00 0.00 9135.46 8340.95 34555.35 00:27:44.771 { 00:27:44.771 "results": [ 00:27:44.771 { 00:27:44.771 "job": "nvme0n1", 00:27:44.771 "core_mask": "0x2", 00:27:44.771 "workload": "randread", 00:27:44.771 "status": "finished", 00:27:44.771 "queue_depth": 128, 00:27:44.771 "io_size": 4096, 00:27:44.771 "runtime": 2.00611, 00:27:44.771 "iops": 14000.229299490058, 00:27:44.771 "mibps": 54.68839570113304, 00:27:44.771 "io_failed": 0, 00:27:44.771 "io_timeout": 0, 00:27:44.771 "avg_latency_us": 9135.462277032231, 00:27:44.771 "min_latency_us": 8340.945454545454, 00:27:44.771 "max_latency_us": 34555.34545454545 00:27:44.771 } 00:27:44.771 ], 00:27:44.771 "core_count": 1 00:27:44.771 } 00:27:44.771 19:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:44.771 19:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:44.771 19:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:44.771 | .driver_specific 00:27:44.771 | .nvme_error 00:27:44.771 | .status_code 00:27:44.771 | .command_transient_transport_error' 00:27:44.771 19:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80449 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80449 ']' 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80449 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80449 00:27:45.029 
19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:45.029 killing process with pid 80449 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80449' 00:27:45.029 Received shutdown signal, test time was about 2.000000 seconds 00:27:45.029 00:27:45.029 Latency(us) 00:27:45.029 [2024-10-17T19:29:54.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.029 [2024-10-17T19:29:54.287Z] =================================================================================================================== 00:27:45.029 [2024-10-17T19:29:54.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80449 00:27:45.029 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80449 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80502 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80502 /var/tmp/bperf.sock 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80502 ']' 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.287 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.287 [2024-10-17 19:29:54.446092] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:45.287 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:45.287 Zero copy mechanism will not be used. 
00:27:45.287 [2024-10-17 19:29:54.446239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80502 ] 00:27:45.545 [2024-10-17 19:29:54.587536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.545 [2024-10-17 19:29:54.650722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.545 [2024-10-17 19:29:54.708238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:45.545 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.545 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:45.545 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.545 19:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.109 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.368 nvme0n1 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:46.368 19:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.368 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:46.368 Zero copy mechanism will not be used. 00:27:46.368 Running I/O for 2 seconds... 
00:27:46.368 [2024-10-17 19:29:55.611566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.368 [2024-10-17 19:29:55.611635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.368 [2024-10-17 19:29:55.611653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.368 [2024-10-17 19:29:55.616549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.368 [2024-10-17 19:29:55.616586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.368 [2024-10-17 19:29:55.616599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.368 [2024-10-17 19:29:55.621420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.368 [2024-10-17 19:29:55.621458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.368 [2024-10-17 19:29:55.621472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.626380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.626422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.626436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.631375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.631415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.631428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.636245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.636280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.636294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.641148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.641185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.641197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.645970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.646016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.646031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.650890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.650928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.650941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.655828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.655864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.655877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.660879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.660919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.660933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.665832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.665870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.665884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.670812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.670851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.670864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.675831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.675870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.675883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.680774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.680816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.680830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.685680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.685716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.685730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.690644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.690681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.690694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.695581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.695617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.695630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.700535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.700575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.700589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.705418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.705453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.705466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.710344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.710379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.628 [2024-10-17 19:29:55.710392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.715226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.715259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.628 [2024-10-17 19:29:55.715272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.628 [2024-10-17 19:29:55.720078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.628 [2024-10-17 19:29:55.720115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.720141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.724986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.725020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.725033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.729954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.729990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.730011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.734914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.734950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.734963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.739847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.739884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.744799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.744836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.744849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.749820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.749857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.749871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.754873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.754908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.754921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.759729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.759765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.759778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.764642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.764677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.764691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.769605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.769641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.769654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.774532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.774567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.774580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.779458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.779492] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.779504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.784296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.784330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.784343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.789155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.789201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.793986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.794028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.794040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.798874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.798908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.798921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.803699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.803733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.803746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.808527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.808562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.808574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.813361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 
19:29:55.813395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.813407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.818203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.818235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.818248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.823051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.629 [2024-10-17 19:29:55.823084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.629 [2024-10-17 19:29:55.823097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.629 [2024-10-17 19:29:55.827960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.827995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.828008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.832807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.832845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.837698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.837733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.837746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.842740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.842777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.847681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.847716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.847728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.852550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.852586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.852599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.857441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.857475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.857487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.862334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.862368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.862381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.867212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.867244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.867257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.872053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.872086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.872099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.876871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.876905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.876917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.630 [2024-10-17 19:29:55.881805] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.630 [2024-10-17 19:29:55.881840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.630 [2024-10-17 19:29:55.881853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.889 [2024-10-17 19:29:55.886680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.889 [2024-10-17 19:29:55.886716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.889 [2024-10-17 19:29:55.886729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.889 [2024-10-17 19:29:55.891555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.889 [2024-10-17 19:29:55.891589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.889 [2024-10-17 19:29:55.891602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.889 [2024-10-17 19:29:55.896433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.889 [2024-10-17 19:29:55.896467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.889 [2024-10-17 19:29:55.896479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.889 [2024-10-17 19:29:55.901431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.889 [2024-10-17 19:29:55.901464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.889 [2024-10-17 19:29:55.901477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.889 [2024-10-17 19:29:55.906400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.889 [2024-10-17 19:29:55.906434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.906447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.911325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.911357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.911370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:46.890 [2024-10-17 19:29:55.916228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.916263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.916288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.921177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.921212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.921225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.926058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.926092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.926106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.930948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.930982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.930995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.935840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.935874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.935886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.940690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.940725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.940739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.945585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.945620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.950469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.950503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.950516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.955432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.955469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.955483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.960448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.960483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.960497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.965313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.965346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.965358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.970229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.970262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.970275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.975123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.975170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.975183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.980032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.980066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.980079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.984959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.984992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.985005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.989843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.989878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.989890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.994725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.994760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.994772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:55.999609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:55.999643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:55.999655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.004464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.004498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.004510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.009273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.009306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.009319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.014162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.014194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:46.890 [2024-10-17 19:29:56.014206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.019073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.019106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.019119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.023996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.024030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.024042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.028831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.028864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.028876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.033666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.890 [2024-10-17 19:29:56.033700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.890 [2024-10-17 19:29:56.033713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.890 [2024-10-17 19:29:56.038535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.038568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.038581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.043463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.043497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.043509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.048308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.048341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.048353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.053161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.053194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.053206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.057954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.057988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.058009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.062768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.062802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.062815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.067614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.067647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.067659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.072468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.072503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.072515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.077347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.077380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.077393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.082226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.082258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.082270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.087001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.087035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.087047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.091870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.091905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.091917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.096794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.096828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.096841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.101670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.101706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.101719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.106549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.106584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.106597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.111404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.111438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.111450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.116274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 
00:27:46.891 [2024-10-17 19:29:56.116307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.116320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.121114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.121159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.121172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.125956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.125989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.130859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.130893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.130905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.135675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.135709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.135721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.140556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.140594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.140607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.891 [2024-10-17 19:29:56.145560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:46.891 [2024-10-17 19:29:56.145594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.891 [2024-10-17 19:29:56.145607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.150481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.150516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.150529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.155365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.155399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.155411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.160253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.160286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.160298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.165188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.165221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.165233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.170076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.170109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.170121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.174957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.174991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.175004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.179798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.179831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.179843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.184632] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.184665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.184678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.189466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.150 [2024-10-17 19:29:56.189500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.150 [2024-10-17 19:29:56.189512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.150 [2024-10-17 19:29:56.194395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.194429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.194442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.199313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.199346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.199359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.204239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.204273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.204286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.209060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.209094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.209107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.213973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.214015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.214028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:47.151 [2024-10-17 19:29:56.218842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.218878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.223751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.223790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.223802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.228655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.228689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.228701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.233506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.233540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.233553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.238395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.238429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.238442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.243331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.243364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.243377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.248223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.248252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.248288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.253216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.253253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.253265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.258140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.258172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.258185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.263086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.263125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.263156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.268033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.268069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.268082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.273152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.273187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.273200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.278197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.278245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.278258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.283191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.283227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.283240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.288176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.288211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.288224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.293267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.293303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.293317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.298296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.298331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.298344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.303396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.303432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.303445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.308570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.308606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.308620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.313649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.313686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.151 [2024-10-17 19:29:56.313700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.318703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.151 [2024-10-17 19:29:56.318740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.151 [2024-10-17 19:29:56.318752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.151 [2024-10-17 19:29:56.323785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.323821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.323834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.328766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.328802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.328815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.333852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.333888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.333901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.338900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.338946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.338960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.343982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.344020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.344033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.348945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.348980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.348994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.354042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.354079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.354092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.359054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.359091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.359104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.364157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.364194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.364207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.369203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.369239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.369252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.374218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.374252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.374265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.379186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.379221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.379234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.384170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.384204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.384217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.389193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.389226] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.389239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.394223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.394258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.394270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.399235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.399269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.399281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.152 [2024-10-17 19:29:56.404409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.152 [2024-10-17 19:29:56.404445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.152 [2024-10-17 19:29:56.404459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.409615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 19:29:56.409653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.414575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 19:29:56.414612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.414626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.419557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 19:29:56.419594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.419607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.424580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 
19:29:56.424618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.424631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.429559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 19:29:56.429595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.429608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.434576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.411 [2024-10-17 19:29:56.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.411 [2024-10-17 19:29:56.434625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.411 [2024-10-17 19:29:56.439615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.439652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.439665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.444558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.444594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.444607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.449565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.449601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.449614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.454742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.454779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.454792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.459849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.459885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.459898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.465091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.465142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.465157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.470435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.470472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.470486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.475602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.475641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.475654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.480740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.480777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.480790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.485746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.485782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.485795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.490795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.490835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.490850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.495830] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.495865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.495878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.500749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.500783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.500795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.505689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.505724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.505737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.510578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.510612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.510625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.515486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.515520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.515533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.520303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.520336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.520349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.525207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.525241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.525254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:47.412 [2024-10-17 19:29:56.530111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.530157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.530170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.535049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.535086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.539941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.539977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.539991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.544952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.544989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.545003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.550100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.550151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.550165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.555036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.555073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.555087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.560008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.560045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.560058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.564918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.564954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.564967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.569862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.569898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.569911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.574847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.574882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.412 [2024-10-17 19:29:56.574896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.412 [2024-10-17 19:29:56.579816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.412 [2024-10-17 19:29:56.579852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.579865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.584871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.584908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.584921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.589895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.589931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.589945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.594998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.595036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.595049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.600186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.600223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.600237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.413 6231.00 IOPS, 778.88 MiB/s [2024-10-17T19:29:56.671Z] [2024-10-17 19:29:56.606879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.606923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.606937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.611871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.611911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.611924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.616871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.621783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.621820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.621834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.626757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.626794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.626807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.631825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.631864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.631877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.636981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.637017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.637030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.642297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.642334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.642348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.647484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.647519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.647542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.652601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.652636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.652649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.657762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.657802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.657816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.662796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.662833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.662847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.413 [2024-10-17 19:29:56.667772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.413 [2024-10-17 19:29:56.667808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.413 [2024-10-17 19:29:56.667821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.672706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.672753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.672766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.677683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.677719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.677732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.682666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.682702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.682716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.687665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.687699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.687712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.692684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.692721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.697638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.697674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.697687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.702719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 
00:27:47.673 [2024-10-17 19:29:56.702756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.702769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.707660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.707709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.712514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.712549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.712562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.717526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.717562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.717575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.722438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.722478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.722492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.727411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.727449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.727462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.732342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.732380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.732393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.737312] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.737347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.737360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.742308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.742342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.742356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.747241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.747274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.747287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.752680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.752716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.752730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.757568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.757603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.757616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.673 [2024-10-17 19:29:56.762478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.673 [2024-10-17 19:29:56.762514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.673 [2024-10-17 19:29:56.762528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.767341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.767375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.767388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.772307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.772340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.772353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.777181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.777214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.777227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.782107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.782154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.782168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.787206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.787244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.787261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.792083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.792119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.792150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.797095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.797160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.802120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.802167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.802181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.807097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.807144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.807158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.811989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.812024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.812037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.816955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.817006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.821875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.821909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.821922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.826737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.826785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.831671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.831706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.831719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.836569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.836609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.836622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.841494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.841531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.841544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.846347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.846381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.846395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.851244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.851278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.851290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.856160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.856189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.856202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.861016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.861049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.861062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.866045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.866084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.866097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.870964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.870998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.674 [2024-10-17 19:29:56.871010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.875979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.876017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.876029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.880873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.880907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.880920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.885746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.885781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.885794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.674 [2024-10-17 19:29:56.890688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.674 [2024-10-17 19:29:56.890721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.674 [2024-10-17 19:29:56.890734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.895517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.895552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.895566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.900372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.900406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.900419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.905235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.905268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.905281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.910083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.910117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.910142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.915277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.915310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.915322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.920248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.920290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.920304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.675 [2024-10-17 19:29:56.925350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.675 [2024-10-17 19:29:56.925384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.675 [2024-10-17 19:29:56.925397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.930522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.930555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.930568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.935611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.935647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.935660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.940761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.940794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.940806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.945829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.945863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.945876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.951035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.951072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.951085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.956050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.956084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.956098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.961010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.961044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.961057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.966026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.966073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.971067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.971113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.971126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.976197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 
00:27:47.934 [2024-10-17 19:29:56.976230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.976244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.981223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.981254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.981266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.986145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.986178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.986191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.991069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.991104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.934 [2024-10-17 19:29:56.991117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.934 [2024-10-17 19:29:56.996097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.934 [2024-10-17 19:29:56.996148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:56.996163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.001180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.001210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.001223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.006149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.006181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.006193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.011022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.011072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.015959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.015993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.016005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.020957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.020991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.021004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.025811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.025845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.025857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.030696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.030730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.030743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.035576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.035611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.035624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.040461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.040495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.040508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.045450] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.045484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.045497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.050310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.050342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.055211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.055244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.055257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.060008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.060042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.064935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.064970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.064982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.069803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.069836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.069849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.074721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.074754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.074767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:47.935 [2024-10-17 19:29:57.079544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.079577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.079589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.084559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.084593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.084606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.089430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.089464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.089476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.094315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.094348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.094360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.099169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.099202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.099214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.104018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.104052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.104066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.108887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.108920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.108932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.113735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.113769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.113782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.118649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.118682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.118695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.123502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.123536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.123549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.128428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.128464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.128477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.133350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.133384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.133397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.138293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.138326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.935 [2024-10-17 19:29:57.138339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.935 [2024-10-17 19:29:57.143227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.935 [2024-10-17 19:29:57.143261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.143273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.148145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.148178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.148190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.153063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.153097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.153110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.157941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.157976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.157988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.162911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.162958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.167822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.167857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.167869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.172740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.172774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.172787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.177645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.177679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.936 [2024-10-17 19:29:57.177691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.182547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.182581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.182595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.936 [2024-10-17 19:29:57.187515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:47.936 [2024-10-17 19:29:57.187551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.936 [2024-10-17 19:29:57.187564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.192416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.192451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.192463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.197345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.197382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.197396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.202245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.202279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.202292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.207113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.207157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.212002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.212036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.212048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.216872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.216906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.216918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.221789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.221823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.221835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.226849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.226892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.226906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.231712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.231747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.231760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.236649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.236684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.236697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.241516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.241550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.241564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.246459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.246494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.246507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.251403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.251437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.251450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.256291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.256324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.256337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.261145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.261178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.261191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.266069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.266103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.266116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.270973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.271009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.271022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.275997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.276033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.276047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.281213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 
00:27:48.195 [2024-10-17 19:29:57.281254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.286193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.286229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.286242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.291166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.291217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.291239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.296085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.296123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.296149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.300973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.301008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.301021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.305845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.305880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.305893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.310741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.310777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.310791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.195 [2024-10-17 19:29:57.315750] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.195 [2024-10-17 19:29:57.315784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.195 [2024-10-17 19:29:57.315797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.320604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.320639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.320652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.325574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.325610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.325622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.330547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.330582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.330595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.335469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.335503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.335517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.340375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.340409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.340421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.345317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.345350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.345363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:48.196 [2024-10-17 19:29:57.350216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.350251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.350264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.355185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.355220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.355233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.360086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.360120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.364978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.365011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.365024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.369855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.369890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.369903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.374783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.374817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.374830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.379739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.379773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.379787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.384689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.384724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.384737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.389608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.389644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.389658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.394762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.394797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.394810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.399636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.399671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.399684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.404484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.404518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.404532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.409416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.409450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.414331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.414366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.414378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.419206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.419239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.419251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.424197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.424231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.424244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.429156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.429191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.429204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.434052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.434085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.434098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.438962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.438997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.439010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.443850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.443890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.196 [2024-10-17 19:29:57.443903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.196 [2024-10-17 19:29:57.448757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.196 [2024-10-17 19:29:57.448800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:48.196 [2024-10-17 19:29:57.448813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.453813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.453853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.453867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.458820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.458859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.458872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.463791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.463828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.463841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.468725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.468760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.468773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.473677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.473713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.473726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.454 [2024-10-17 19:29:57.478754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.454 [2024-10-17 19:29:57.478796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.454 [2024-10-17 19:29:57.478810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.484038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.484079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.484093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.489018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.489055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.489069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.493975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.494023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.494037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.498965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.499003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.499016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.503880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.503916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.503929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.508954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.508997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.509012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.513975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.514039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.518974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.519013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.519028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.524046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.524087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.524101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.528966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.529004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.529017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.533874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.533911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.533925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.538853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.538888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.538902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.543707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.543742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.543755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.548671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.548710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.548724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.553689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 
00:27:48.455 [2024-10-17 19:29:57.553729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.553743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.558598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.558635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.558648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.563495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.563530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.563544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.568357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.568393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.568406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.573398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.573434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.573447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.578395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.578431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.578445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.583290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.583324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.583337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.588234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.588269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.588283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.593197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.593232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.593245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.455 [2024-10-17 19:29:57.598115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.598160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.598174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.455 6239.00 IOPS, 779.88 MiB/s [2024-10-17T19:29:57.713Z] [2024-10-17 19:29:57.604715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9b9210) 00:27:48.455 [2024-10-17 19:29:57.604754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.455 [2024-10-17 19:29:57.604768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.455 00:27:48.455 Latency(us) 00:27:48.455 [2024-10-17T19:29:57.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.455 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:48.455 nvme0n1 : 2.00 6238.11 779.76 0.00 0.00 2561.25 2323.55 6881.28 00:27:48.455 [2024-10-17T19:29:57.713Z] =================================================================================================================== 00:27:48.455 [2024-10-17T19:29:57.713Z] Total : 6238.11 779.76 0.00 0.00 2561.25 2323.55 6881.28 00:27:48.455 { 00:27:48.455 "results": [ 00:27:48.455 { 00:27:48.455 "job": "nvme0n1", 00:27:48.455 "core_mask": "0x2", 00:27:48.455 "workload": "randread", 00:27:48.455 "status": "finished", 00:27:48.455 "queue_depth": 16, 00:27:48.455 "io_size": 131072, 00:27:48.455 "runtime": 2.002849, 00:27:48.455 "iops": 6238.1138068820965, 00:27:48.455 "mibps": 779.7642258602621, 00:27:48.455 "io_failed": 0, 00:27:48.455 "io_timeout": 0, 00:27:48.455 "avg_latency_us": 2561.2484642810364, 00:27:48.455 "min_latency_us": 2323.549090909091, 00:27:48.455 "max_latency_us": 6881.28 00:27:48.455 } 00:27:48.455 ], 00:27:48.455 "core_count": 1 00:27:48.455 } 00:27:48.455 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:48.455 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:48.455 19:29:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:48.455 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:48.455 | .driver_specific 00:27:48.455 | .nvme_error 00:27:48.455 | .status_code 00:27:48.455 | .command_transient_transport_error' 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 403 > 0 )) 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80502 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80502 ']' 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80502 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.039 19:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80502 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:49.039 killing process with pid 80502 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80502' 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80502 00:27:49.039 Received shutdown signal, test time was about 2.000000 seconds 00:27:49.039 00:27:49.039 Latency(us) 00:27:49.039 [2024-10-17T19:29:58.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.039 [2024-10-17T19:29:58.297Z] =================================================================================================================== 00:27:49.039 [2024-10-17T19:29:58.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80502 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80556 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80556 /var/tmp/bperf.sock 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # '[' -z 80556 ']' 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.039 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.039 [2024-10-17 19:29:58.279056] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:49.039 [2024-10-17 19:29:58.279193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80556 ] 00:27:49.301 [2024-10-17 19:29:58.418363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.301 [2024-10-17 19:29:58.483664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.301 [2024-10-17 19:29:58.538045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:49.558 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:49.558 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:49.558 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.558 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.817 19:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.076 nvme0n1 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:50.076 19:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.335 Running I/O for 2 seconds... 00:27:50.335 [2024-10-17 19:29:59.377847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fef90 00:27:50.335 [2024-10-17 19:29:59.380495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.380576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.394984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166feb58 00:27:50.335 [2024-10-17 19:29:59.397566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.397637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.411999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fe2e8 00:27:50.335 [2024-10-17 19:29:59.414581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.414619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.428527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fda78 00:27:50.335 [2024-10-17 19:29:59.431036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.431071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.444996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fd208 00:27:50.335 [2024-10-17 19:29:59.447495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.447533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.461434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fc998 00:27:50.335 [2024-10-17 19:29:59.463905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 
19:29:59.463941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.477912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fc128 00:27:50.335 [2024-10-17 19:29:59.480381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.480426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.495557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fb8b8 00:27:50.335 [2024-10-17 19:29:59.497986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.498036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.512941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fb048 00:27:50.335 [2024-10-17 19:29:59.515394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.515448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.530349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166fa7d8 00:27:50.335 [2024-10-17 19:29:59.532748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.532791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.547232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f9f68 00:27:50.335 [2024-10-17 19:29:59.549620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.549660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.564681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f96f8 00:27:50.335 [2024-10-17 19:29:59.567066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.335 [2024-10-17 19:29:59.567111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:50.335 [2024-10-17 19:29:59.581933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f8e88 00:27:50.335 [2024-10-17 19:29:59.584294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.335 [2024-10-17 19:29:59.584341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.598740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f8618 00:27:50.595 [2024-10-17 19:29:59.601073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.601155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.615865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f7da8 00:27:50.595 [2024-10-17 19:29:59.618207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.618267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.632884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f7538 00:27:50.595 [2024-10-17 19:29:59.635180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.635224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.650192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f6cc8 00:27:50.595 [2024-10-17 19:29:59.652457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.652502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.667796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f6458 00:27:50.595 [2024-10-17 19:29:59.670078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.670159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.686507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f5be8 00:27:50.595 [2024-10-17 19:29:59.688743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.688822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.705172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f5378 00:27:50.595 [2024-10-17 19:29:59.707404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21755 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.707459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.723718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f4b08 00:27:50.595 [2024-10-17 19:29:59.725932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.725991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.741494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f4298 00:27:50.595 [2024-10-17 19:29:59.743708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.743768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.759904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f3a28 00:27:50.595 [2024-10-17 19:29:59.762064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.762119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.778266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f31b8 00:27:50.595 [2024-10-17 19:29:59.780397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.780451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.796534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f2948 00:27:50.595 [2024-10-17 19:29:59.798672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.798725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.814776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f20d8 00:27:50.595 [2024-10-17 19:29:59.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.816942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.833033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f1868 00:27:50.595 [2024-10-17 19:29:59.835143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18202 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.835192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:50.595 [2024-10-17 19:29:59.850292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f0ff8 00:27:50.595 [2024-10-17 19:29:59.852340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.595 [2024-10-17 19:29:59.852377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.868122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f0788 00:27:50.855 [2024-10-17 19:29:59.870187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.870250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.886333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eff18 00:27:50.855 [2024-10-17 19:29:59.888366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.888416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.904772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ef6a8 00:27:50.855 [2024-10-17 19:29:59.906792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.906842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.922985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eee38 00:27:50.855 [2024-10-17 19:29:59.924990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.925050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.941233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ee5c8 00:27:50.855 [2024-10-17 19:29:59.943236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.943287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.959561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166edd58 00:27:50.855 [2024-10-17 19:29:59.961514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:24719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.855 [2024-10-17 19:29:59.961569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:50.855 [2024-10-17 19:29:59.977754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ed4e8 00:27:50.855 [2024-10-17 19:29:59.979704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:29:59.979759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:29:59.995993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ecc78 00:27:50.856 [2024-10-17 19:29:59.997893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:29:59.997943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.014313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ec408 00:27:50.856 [2024-10-17 19:30:00.016194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.016239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.032578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ebb98 00:27:50.856 [2024-10-17 19:30:00.034490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.034533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.052106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eb328 00:27:50.856 [2024-10-17 19:30:00.054058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.054114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.070557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eaab8 00:27:50.856 [2024-10-17 19:30:00.072398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.072440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.088918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ea248 00:27:50.856 [2024-10-17 19:30:00.090745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.090787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:50.856 [2024-10-17 19:30:00.107467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e99d8 00:27:50.856 [2024-10-17 19:30:00.109355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.856 [2024-10-17 19:30:00.109399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.124719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e9168 00:27:51.115 [2024-10-17 19:30:00.126497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.126537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.141557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e88f8 00:27:51.115 [2024-10-17 19:30:00.143299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.143354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.158383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e8088 00:27:51.115 [2024-10-17 19:30:00.160078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.160114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.175012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e7818 00:27:51.115 [2024-10-17 19:30:00.176787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.176832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.191827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e6fa8 00:27:51.115 [2024-10-17 19:30:00.193597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.193636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.208729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e6738 00:27:51.115 [2024-10-17 19:30:00.210406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.210462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.225557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e5ec8 00:27:51.115 [2024-10-17 19:30:00.227205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.227262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.242483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e5658 00:27:51.115 [2024-10-17 19:30:00.244212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.244269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.259458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e4de8 00:27:51.115 [2024-10-17 19:30:00.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.261176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.276336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e4578 00:27:51.115 [2024-10-17 19:30:00.277900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.277951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.293004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e3d08 00:27:51.115 [2024-10-17 19:30:00.294571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.294606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.309812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e3498 00:27:51.115 [2024-10-17 19:30:00.311454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.311489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.326728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e2c28 00:27:51.115 [2024-10-17 
19:30:00.328328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.328363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:51.115 [2024-10-17 19:30:00.343454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e23b8 00:27:51.115 [2024-10-17 19:30:00.344922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.344955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:51.115 14296.00 IOPS, 55.84 MiB/s [2024-10-17T19:30:00.373Z] [2024-10-17 19:30:00.364483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e1b48 00:27:51.115 [2024-10-17 19:30:00.366734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.115 [2024-10-17 19:30:00.366776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.386633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e12d8 00:27:51.378 [2024-10-17 19:30:00.388855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.388893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.406734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e0a68 00:27:51.378 [2024-10-17 19:30:00.408419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.427365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e01f8 00:27:51.378 [2024-10-17 19:30:00.429349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.429413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.447078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166df988 00:27:51.378 [2024-10-17 19:30:00.448589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.448656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.464486] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x14cc230) with pdu=0x2000166df118 00:27:51.378 [2024-10-17 19:30:00.465836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.465878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.481852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166de8a8 00:27:51.378 [2024-10-17 19:30:00.483324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.378 [2024-10-17 19:30:00.483369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:51.378 [2024-10-17 19:30:00.498771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166de038 00:27:51.379 [2024-10-17 19:30:00.500114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.500175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.522388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166de038 00:27:51.379 [2024-10-17 19:30:00.524949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.538968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166de8a8 00:27:51.379 [2024-10-17 19:30:00.541503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.541546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.555421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166df118 00:27:51.379 [2024-10-17 19:30:00.557926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.557975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.571891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166df988 00:27:51.379 [2024-10-17 19:30:00.574412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.574466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.588436] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e01f8 00:27:51.379 [2024-10-17 19:30:00.591025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.591078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.605165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e0a68 00:27:51.379 [2024-10-17 19:30:00.607614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.607656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:51.379 [2024-10-17 19:30:00.621645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e12d8 00:27:51.379 [2024-10-17 19:30:00.624088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.379 [2024-10-17 19:30:00.624150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.638122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e1b48 00:27:51.654 [2024-10-17 19:30:00.640554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.640597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.654783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e23b8 00:27:51.654 [2024-10-17 19:30:00.657213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.657267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.671362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e2c28 00:27:51.654 [2024-10-17 19:30:00.673775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.673829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.688099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e3498 00:27:51.654 [2024-10-17 19:30:00.690477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.690535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:51.654 
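The records above are the expected outcome of this digest-error test: the data digest check fails on the bperf initiator's TCP qpair (data_crc32_calc_done) and each affected command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the test provokes by corrupting the crc32c accel operation. The digest itself is a plain CRC-32C over the PDU payload; the following Python sketch illustrates that check (an illustration only, not SPDK's C implementation in tcp.c, which runs the same calculation through the accel framework the test corrupts):

# Minimal CRC-32C (Castagnoli) sketch of the NVMe/TCP data-digest check.
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_payload: bytes, received_digest: int) -> bool:
    # A mismatch is what produces the "Data digest error" lines above and the
    # transient transport error completion status for the affected command.
    return crc32c(pdu_payload) == received_digest

assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value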
[2024-10-17 19:30:00.704942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e3d08 00:27:51.654 [2024-10-17 19:30:00.707305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.707357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.721528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e4578 00:27:51.654 [2024-10-17 19:30:00.723869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.723929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.738329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e4de8 00:27:51.654 [2024-10-17 19:30:00.740630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.740688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.754813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e5658 00:27:51.654 [2024-10-17 19:30:00.757088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.757127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.771202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e5ec8 00:27:51.654 [2024-10-17 19:30:00.773450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.773489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.787600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e6738 00:27:51.654 [2024-10-17 19:30:00.789915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.789961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.804274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e6fa8 00:27:51.654 [2024-10-17 19:30:00.806491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.806549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:27:51.654 [2024-10-17 19:30:00.820854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e7818 00:27:51.654 [2024-10-17 19:30:00.823058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.823115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.837533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e8088 00:27:51.654 [2024-10-17 19:30:00.839721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.839776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.854183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e88f8 00:27:51.654 [2024-10-17 19:30:00.856341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.856401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.870972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e9168 00:27:51.654 [2024-10-17 19:30:00.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.887471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166e99d8 00:27:51.654 [2024-10-17 19:30:00.889612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.889656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:51.654 [2024-10-17 19:30:00.903914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ea248 00:27:51.654 [2024-10-17 19:30:00.906038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.654 [2024-10-17 19:30:00.906077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:51.913 [2024-10-17 19:30:00.920367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eaab8 00:27:51.913 [2024-10-17 19:30:00.922458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.913 [2024-10-17 19:30:00.922498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 
cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:51.913 [2024-10-17 19:30:00.936819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eb328 00:27:51.913 [2024-10-17 19:30:00.938905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.913 [2024-10-17 19:30:00.938945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:51.913 [2024-10-17 19:30:00.953320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ebb98 00:27:51.913 [2024-10-17 19:30:00.955394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.913 [2024-10-17 19:30:00.955429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:51.913 [2024-10-17 19:30:00.969719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ec408 00:27:51.913 [2024-10-17 19:30:00.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:00.971780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:00.986084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ecc78 00:27:51.914 [2024-10-17 19:30:00.988083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:00.988125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.002563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ed4e8 00:27:51.914 [2024-10-17 19:30:01.004544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.004590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.019005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166edd58 00:27:51.914 [2024-10-17 19:30:01.020983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.021023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.035596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ee5c8 00:27:51.914 [2024-10-17 19:30:01.037555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.037598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.052147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eee38 00:27:51.914 [2024-10-17 19:30:01.054063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.054096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.068593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166ef6a8 00:27:51.914 [2024-10-17 19:30:01.070509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.070542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.085077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166eff18 00:27:51.914 [2024-10-17 19:30:01.086974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.087013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.101522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f0788 00:27:51.914 [2024-10-17 19:30:01.103390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.103424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.117927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f0ff8 00:27:51.914 [2024-10-17 19:30:01.119801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.119843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.134416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f1868 00:27:51.914 [2024-10-17 19:30:01.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.136309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.150902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f20d8 00:27:51.914 [2024-10-17 19:30:01.152727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:51.914 [2024-10-17 19:30:01.167339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f2948 00:27:51.914 [2024-10-17 19:30:01.169117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:51.914 [2024-10-17 19:30:01.169165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.183745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f31b8 00:27:52.173 [2024-10-17 19:30:01.185510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.185541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.200175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f3a28 00:27:52.173 [2024-10-17 19:30:01.201899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.201940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.216731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f4298 00:27:52.173 [2024-10-17 19:30:01.218469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.218504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.233445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f4b08 00:27:52.173 [2024-10-17 19:30:01.235171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.235209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.249907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f5378 00:27:52.173 [2024-10-17 19:30:01.251626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.251665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.266349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f5be8 00:27:52.173 [2024-10-17 19:30:01.268055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.268088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.282810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f6458 00:27:52.173 [2024-10-17 19:30:01.284462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.284502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.299224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f6cc8 00:27:52.173 [2024-10-17 19:30:01.300856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.300893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.315654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f7538 00:27:52.173 [2024-10-17 19:30:01.317268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.173 [2024-10-17 19:30:01.317303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:52.173 [2024-10-17 19:30:01.332092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f7da8 00:27:52.173 [2024-10-17 19:30:01.333678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.174 [2024-10-17 19:30:01.333718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:52.174 [2024-10-17 19:30:01.348499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc230) with pdu=0x2000166f8618 00:27:52.174 [2024-10-17 19:30:01.350060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.174 [2024-10-17 19:30:01.350102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:52.174 14675.00 IOPS, 57.32 MiB/s 00:27:52.174 Latency(us) 00:27:52.174 [2024-10-17T19:30:01.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.174 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.174 nvme0n1 : 2.01 14685.73 57.37 0.00 0.00 8697.85 4617.31 32410.53 00:27:52.174 [2024-10-17T19:30:01.432Z] =================================================================================================================== 00:27:52.174 [2024-10-17T19:30:01.432Z] Total : 14685.73 57.37 0.00 0.00 8697.85 4617.31 32410.53 00:27:52.174 { 00:27:52.174 "results": [ 00:27:52.174 { 00:27:52.174 "job": "nvme0n1", 00:27:52.174 "core_mask": "0x2", 00:27:52.174 "workload": "randwrite", 00:27:52.174 "status": "finished", 00:27:52.174 "queue_depth": 128, 00:27:52.174 "io_size": 4096, 00:27:52.174 "runtime": 2.008684, 00:27:52.174 "iops": 
14685.734540624608, 00:27:52.174 "mibps": 57.366150549314874, 00:27:52.174 "io_failed": 0, 00:27:52.174 "io_timeout": 0, 00:27:52.174 "avg_latency_us": 8697.847523706503, 00:27:52.174 "min_latency_us": 4617.309090909091, 00:27:52.174 "max_latency_us": 32410.53090909091 00:27:52.174 } 00:27:52.174 ], 00:27:52.174 "core_count": 1 00:27:52.174 } 00:27:52.174 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:52.174 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:52.174 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:52.174 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:52.174 | .driver_specific 00:27:52.174 | .nvme_error 00:27:52.174 | .status_code 00:27:52.174 | .command_transient_transport_error' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80556 ']' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:52.740 killing process with pid 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80556' 00:27:52.740 Received shutdown signal, test time was about 2.000000 seconds 00:27:52.740 00:27:52.740 Latency(us) 00:27:52.740 [2024-10-17T19:30:01.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.740 [2024-10-17T19:30:01.998Z] =================================================================================================================== 00:27:52.740 [2024-10-17T19:30:01.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80556 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:52.740 19:30:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80603 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80603 /var/tmp/bperf.sock 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80603 ']' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.740 19:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.740 [2024-10-17 19:30:01.991411] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:27:52.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:52.740 Zero copy mechanism will not be used. 00:27:52.740 [2024-10-17 19:30:01.991557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80603 ] 00:27:52.998 [2024-10-17 19:30:02.135354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.998 [2024-10-17 19:30:02.206732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.256 [2024-10-17 19:30:02.264366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:53.256 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.256 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:53.256 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:53.256 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:53.515 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:53.515 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.515 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.515 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.515 
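The get_transient_errcount step earlier in this log issues bdev_get_iostat against the bperf RPC socket and filters the per-bdev NVMe error counters with jq (.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error); in the run above that counter came back as 115, which the (( 115 > 0 )) check asserts. A rough Python equivalent of the same query, assuming the usual SPDK JSON-RPC 2.0 exchange over the UNIX domain socket that rpc.py performs (framing simplified to a single request/response), might look like:

import json
import socket

def transient_errcount(bdev="nvme0n1", rpc_sock="/var/tmp/bperf.sock"):
    # Ask the bdevperf app for per-bdev I/O statistics, including the NVMe
    # error counters enabled by bdev_nvme_set_options --nvme-error-stat.
    req = {"jsonrpc": "2.0", "id": 1,
           "method": "bdev_get_iostat", "params": {"name": bdev}}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(rpc_sock)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                resp = json.loads(buf)  # keep reading until a full JSON reply parses
                break
            except json.JSONDecodeError:
                continue
    stats = resp["result"]["bdevs"][0]
    return stats["driver_specific"]["nvme_error"]["status_code"]["command_transient_transport_error"]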
19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.515 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.774 nvme0n1 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:53.774 19:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:54.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:54.033 Zero copy mechanism will not be used. 00:27:54.033 Running I/O for 2 seconds... 00:27:54.033 [2024-10-17 19:30:03.090794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.091142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.091183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.097167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.097480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.097515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.103449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.103775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.103809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.109762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.110095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.110140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:54.033 [2024-10-17 19:30:03.116146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.116476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.116512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.122685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.123003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.123041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.129257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.129585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.129630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.136027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.136375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.136413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.142715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.143041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.143083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.149254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.149567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.149601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.155592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.155904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.155934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.161853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.162188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.162218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.168212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.168531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.168567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.174545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.174855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.174886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.180831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.033 [2024-10-17 19:30:03.181152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.033 [2024-10-17 19:30:03.181182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.033 [2024-10-17 19:30:03.187113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.187439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.187471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.193367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.193687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.193721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.199635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.199947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.199978] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.205867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.206220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.206250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.212030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.212363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.212393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.218308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.218629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.218659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.224607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.224924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.224967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.230864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.231188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.231220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.237029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.237364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.237397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.243402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.243708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.243741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.249643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.249948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.249978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.255880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.256210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.256240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.262165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.262475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.262504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.268382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.268688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.268719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.274636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.274950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.274982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.280780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.281085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.034 [2024-10-17 19:30:03.281118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.034 [2024-10-17 19:30:03.287148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.034 [2024-10-17 19:30:03.287459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
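The digest-error/transient-transport-error pairs repeating above come from the three RPCs issued at the start of this test case. A minimal shell sketch of that sequence, assuming the same workspace path, sockets, and target address this job logs (which app's RPC socket should receive the error-injection call depends on the harness's rpc_cmd configuration, so treat the second command's socket as an adjustable assumption):

SPDK=/home/vagrant/spdk_repo/spdk

# Attach the remote namespace over TCP with data digest checking enabled (--ddgst),
# against the bdevperf app's RPC socket, as host/digest.sh@18 does in the log above.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c operations in the accel layer (host/digest.sh@67); pass -s <socket>
# explicitly if the default RPC socket is not the app whose digests should be corrupted.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O through bdevperf (host/digest.sh@69); each corrupted digest then surfaces as a
# "Data digest error" followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests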
00:27:54.034 [2024-10-17 19:30:03.287490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.293429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.293745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.293778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.299391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.299467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.299494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.305591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.305677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.305702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.311723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.311804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.311832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.317909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.318011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.318038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.324122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.324228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.324262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.330491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.330583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.330612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.336817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.336935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.336960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.343044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.343140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.343167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.349303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.349382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.349412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.355494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.293 [2024-10-17 19:30:03.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.293 [2024-10-17 19:30:03.355607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.293 [2024-10-17 19:30:03.361667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.361750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.361788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.367870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.367950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.367979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.373997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.374090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.374115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.380227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.380310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.386545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.386632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.386657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.392797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.392890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.399080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.399191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.399220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.405399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.405501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.411611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.411702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.411733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.417759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.417843] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.417869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.423965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.424044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.424070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.430187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.430270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.430298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.436327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.436407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.436434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.442528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.442613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.442639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.448728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.448811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.448839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.454919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.454998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.455025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.461106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.461194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.461221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.468032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.468125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.468162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.475198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.475284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.475310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.481268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.481355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.481390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.487695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.487774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.487808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.493892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.493971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.494023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.500077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.500168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.500196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.506271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 
19:30:03.506358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.512420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.512493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.512519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.518512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.518585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.518612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.524615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.524688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.524715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.530830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.530908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.530936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.537122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.294 [2024-10-17 19:30:03.537215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.294 [2024-10-17 19:30:03.537243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.294 [2024-10-17 19:30:03.543390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.295 [2024-10-17 19:30:03.543467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.295 [2024-10-17 19:30:03.543493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.295 [2024-10-17 19:30:03.549593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 
00:27:54.295 [2024-10-17 19:30:03.549668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.295 [2024-10-17 19:30:03.549695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.555762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.555834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.555864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.562117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.562224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.562257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.568395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.568484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.568522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.574643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.574729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.574771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.581114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.581212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.581245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.587327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.587445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.593430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.593513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.593542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.599725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.599805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.599831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.606065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.606169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.606196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.612396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.612476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.612503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.618514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.618594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.618621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.624660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.624742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.624768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.630856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.630936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.630961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.637123] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.637223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.637249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.643209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.643317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.649360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.649441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.649470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.655530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.655612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.655641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.661842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.661959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.668070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.668163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.668192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.674155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.674232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.674259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.680302] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.680381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.680406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.686367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.686443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.686467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.692666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.692757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.555 [2024-10-17 19:30:03.692784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.555 [2024-10-17 19:30:03.699138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.555 [2024-10-17 19:30:03.699245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.699275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.705665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.705763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.705791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.712415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.712513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.712548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.718657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.718735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.718764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.556 
[2024-10-17 19:30:03.724972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.725048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.725076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.731627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.731721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.738255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.738332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.738362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.744595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.744700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.744733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.751042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.751122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.751179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.757677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.757759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.757800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.764223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.764322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.764360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.770551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.770664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.770701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.777004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.777086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.777125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.783611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.783706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.783745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.790124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.790221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.790267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.796661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.796750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.803259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.803438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.556 [2024-10-17 19:30:03.809980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.556 [2024-10-17 19:30:03.810066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.556 [2024-10-17 19:30:03.810106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.816523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.816607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.816651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.823128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.823225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.823269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.829735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.829827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.829871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.836278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.836354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.836392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.842993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.843073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.843114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.849649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.849746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.849782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.856192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.856273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.856314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.862717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.862803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.862848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.869268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.869351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.869389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.875957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.876056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.876092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.882587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.882670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.882707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.833 [2024-10-17 19:30:03.889117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.833 [2024-10-17 19:30:03.889214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.833 [2024-10-17 19:30:03.889244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.895886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.895985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.896016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.902741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.902828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.902877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.909143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.909287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.909339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.915701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.915784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.915836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.922250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.922337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.922387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.928754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.928836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.928886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.935352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.935440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.935493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.941784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.941868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.941919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.948489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.948596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 
19:30:03.948646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.955027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.955120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.955185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.961755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.961853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.961903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.968230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.968319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.968369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.974764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.974859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.981405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.981489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.981538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.988028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.988121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:03.988187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:03.994652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:03.994738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:54.834 [2024-10-17 19:30:03.994779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.001338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.001447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.001494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.008042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.008143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.008188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.015193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.015285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.015334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.022501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.022622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.022660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.028973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.029070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.029097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.036009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.036140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.036176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.042651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.042765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.042799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.049329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.049421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.049457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.056029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.056111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.056149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.062509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.062610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.062635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.069105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.069200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.069226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.834 [2024-10-17 19:30:04.075946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:54.834 [2024-10-17 19:30:04.076047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.834 [2024-10-17 19:30:04.076082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.105 4823.00 IOPS, 602.88 MiB/s [2024-10-17T19:30:04.363Z] [2024-10-17 19:30:04.084698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.084786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.084812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.091293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.091374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.091400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.097839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.097936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.097972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.104589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.104670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.104697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.111235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.111317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.111342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.117873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.117964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.117989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.124337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.124450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.124474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.131028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.131125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.131164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.137714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 
19:30:04.137795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.144417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.144530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.144554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.150875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.150964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.150989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.157422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.157502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.157529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.163976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.164066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.164092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.170406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.170500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.170526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.176722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.176822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.176846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.183307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 
00:27:55.105 [2024-10-17 19:30:04.183388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.105 [2024-10-17 19:30:04.183413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.105 [2024-10-17 19:30:04.189762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.105 [2024-10-17 19:30:04.189865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.189897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.196237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.196330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.196356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.202787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.202877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.202903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.209421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.209509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.209536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.216187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.216279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.216310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.222815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.222897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.222937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.229389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) 
with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.229483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.229523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.236169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.236272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.236314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.242863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.243017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.249700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.249797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.249839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.256856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.256965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.257010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.263856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.263960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.264004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.270813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.270904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.270947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.277799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.277909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.277957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.284988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.285086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.285164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.292060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.292184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.292232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.298976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.299083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.299126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.306114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.306243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.306291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.313525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.313655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.313699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.320404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.320505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.320553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.327472] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.327586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.327632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.334490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.334593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.334640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.341493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.341585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.341630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.348377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.348476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.348520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.106 [2024-10-17 19:30:04.355456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.106 [2024-10-17 19:30:04.355550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.106 [2024-10-17 19:30:04.355596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.362335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.362433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.362473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.369270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.369351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.369382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.366 
[2024-10-17 19:30:04.376030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.376158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.383014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.383119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.383162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.389809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.389908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.389957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.396555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.396653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.396697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.403246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.403342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.403381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.409934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.410046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.410078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.416547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.416668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.416702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.423174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.423271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.423298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.429942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.430057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.430090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.436678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.436771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.436798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.443399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.443504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.443530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.449896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.449986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.450028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.456472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.456559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.366 [2024-10-17 19:30:04.456584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.366 [2024-10-17 19:30:04.463007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.366 [2024-10-17 19:30:04.463105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.463145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.469666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.469745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.469771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.476301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.476379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.476405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.482778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.482866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.482892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.489545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.489631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.489656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.496055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.496177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.496206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.502753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.502842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.509243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.509341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.509367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.515854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.515937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.515963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.521961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.522066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.522110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.528346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.528466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.534630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.534709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.534741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.540639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.540714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.540739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.547040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.547149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.547176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.553800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.553953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.553983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.560511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.560603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.560629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.567037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.567116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.567159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.573080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.573185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.573212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.579319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.579403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.579428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.585451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.585547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.585574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.591764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.591851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.591877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.597963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.598053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.598085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.604109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.604207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.604232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.610354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.610444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.610485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.616644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.616745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.367 [2024-10-17 19:30:04.622778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.367 [2024-10-17 19:30:04.622852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.367 [2024-10-17 19:30:04.622887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.629052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.629352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.629377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.639983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.640121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.640147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.646163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.646254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 
19:30:04.646278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.651123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.651214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.651238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.656188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.656270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.656295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.661199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.661275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.661300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.666423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.666512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.666535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.671904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.671979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.672004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.677252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.677320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.677345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.682418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.682480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:55.627 [2024-10-17 19:30:04.682506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.687803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.627 [2024-10-17 19:30:04.687884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.627 [2024-10-17 19:30:04.687909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.627 [2024-10-17 19:30:04.692832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.692908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.692931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.697883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.697957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.697982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.702913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.702993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.703019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.707936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.708018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.708042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.713029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.713103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.713142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.718355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.718463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.718518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.723747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.723822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.723847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.728916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.728999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.729023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.733971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.734088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.734117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.739020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.739103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.739152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.744208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.744289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.744313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.749688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.749776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.749800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.754840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.754915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.754939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.759927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.760010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.760034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.765155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.765248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.765271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.770404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.770470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.770493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.775511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.775590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.780984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.781081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.781104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.786119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.786215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.786238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.791480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.791568] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.791590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.796647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.796731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.796753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.801773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.801854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.801877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.806912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.806987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.807011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.812145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.812230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.812252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.818210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.818319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.824255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.824343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.824367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.830153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.830235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.830258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.836095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.836188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.836211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.842077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.628 [2024-10-17 19:30:04.842178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.628 [2024-10-17 19:30:04.842203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.628 [2024-10-17 19:30:04.848115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 19:30:04.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.848238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.629 [2024-10-17 19:30:04.854301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 19:30:04.854375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.854398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.629 [2024-10-17 19:30:04.860536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 19:30:04.860611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.860634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.629 [2024-10-17 19:30:04.866691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 19:30:04.866783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.866808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.629 [2024-10-17 19:30:04.872730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 
19:30:04.872804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.872829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.629 [2024-10-17 19:30:04.878847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.629 [2024-10-17 19:30:04.878932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.629 [2024-10-17 19:30:04.878956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.885038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.888 [2024-10-17 19:30:04.885150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.885177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.891050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.888 [2024-10-17 19:30:04.891139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.891165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.897116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.888 [2024-10-17 19:30:04.897248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.897271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.903312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.888 [2024-10-17 19:30:04.903399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.903423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.909373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.888 [2024-10-17 19:30:04.909471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.909495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.915456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 
00:27:55.888 [2024-10-17 19:30:04.915558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-10-17 19:30:04.915582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.888 [2024-10-17 19:30:04.921619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.921709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.921733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.927578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.927667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.927692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.933591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.933654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.933679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.938666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.938742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.938766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.943648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.943715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.943739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.948713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.948779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.948803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.953842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.953909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.953933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.958890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.958953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.958977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.963819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.963883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.963906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.968912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.968980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.969003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.973914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.973980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.974013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.978968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.979031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.979054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.984207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.984269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.984312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.989632] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.989716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.989741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:04.994950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:04.995018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:04.995042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.000010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.000079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.000102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.005169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.005246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.005269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.010341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.010409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.010431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.015465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.015529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.015552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.020532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.020599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.020621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.025640] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.025717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.025739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.030870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.030934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.030957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.036017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.036081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.036104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.041217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.041286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.041309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.046269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.046343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.046367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.051529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.051612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.051634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.056715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.056781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.056803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.889 
[2024-10-17 19:30:05.061791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.061863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.061887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.066971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.889 [2024-10-17 19:30:05.067035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-10-17 19:30:05.067057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:55.889 [2024-10-17 19:30:05.072270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.890 [2024-10-17 19:30:05.072351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.890 [2024-10-17 19:30:05.072373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:55.890 [2024-10-17 19:30:05.077481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.890 [2024-10-17 19:30:05.077554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.890 [2024-10-17 19:30:05.077581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.890 4968.50 IOPS, 621.06 MiB/s [2024-10-17T19:30:05.148Z] [2024-10-17 19:30:05.083837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14cc3d0) with pdu=0x2000166fef90 00:27:55.890 [2024-10-17 19:30:05.083902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.890 [2024-10-17 19:30:05.083927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:55.890 00:27:55.890 Latency(us) 00:27:55.890 [2024-10-17T19:30:05.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.890 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:55.890 nvme0n1 : 2.00 4967.70 620.96 0.00 0.00 3214.55 1951.19 8936.73 00:27:55.890 [2024-10-17T19:30:05.148Z] =================================================================================================================== 00:27:55.890 [2024-10-17T19:30:05.148Z] Total : 4967.70 620.96 0.00 0.00 3214.55 1951.19 8936.73 00:27:55.890 { 00:27:55.890 "results": [ 00:27:55.890 { 00:27:55.890 "job": "nvme0n1", 00:27:55.890 "core_mask": "0x2", 00:27:55.890 "workload": "randwrite", 00:27:55.890 "status": "finished", 00:27:55.890 "queue_depth": 16, 00:27:55.890 "io_size": 131072, 00:27:55.890 "runtime": 2.00435, 00:27:55.890 "iops": 4967.695262803402, 00:27:55.890 "mibps": 620.9619078504253, 00:27:55.890 
"io_failed": 0, 00:27:55.890 "io_timeout": 0, 00:27:55.890 "avg_latency_us": 3214.547920421449, 00:27:55.890 "min_latency_us": 1951.1854545454546, 00:27:55.890 "max_latency_us": 8936.727272727272 00:27:55.890 } 00:27:55.890 ], 00:27:55.890 "core_count": 1 00:27:55.890 } 00:27:55.890 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:55.890 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:55.890 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:55.890 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:55.890 | .driver_specific 00:27:55.890 | .nvme_error 00:27:55.890 | .status_code 00:27:55.890 | .command_transient_transport_error' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 )) 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80603 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80603 ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80603 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80603 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:56.458 killing process with pid 80603 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80603' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80603 00:27:56.458 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.458 00:27:56.458 Latency(us) 00:27:56.458 [2024-10-17T19:30:05.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.458 [2024-10-17T19:30:05.716Z] =================================================================================================================== 00:27:56.458 [2024-10-17T19:30:05.716Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80603 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80426 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80426 ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80426 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80426 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:56.458 killing process with pid 80426 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80426' 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80426 00:27:56.458 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80426 00:27:56.719 00:27:56.719 real 0m15.979s 00:27:56.719 user 0m30.500s 00:27:56.719 sys 0m5.387s 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.719 ************************************ 00:27:56.719 END TEST nvmf_digest_error 00:27:56.719 ************************************ 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:56.719 19:30:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.978 rmmod nvme_tcp 00:27:56.978 rmmod nvme_fabrics 00:27:56.978 rmmod nvme_keyring 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 80426 ']' 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 80426 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 80426 ']' 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 80426 00:27:56.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80426) - No such process 00:27:56.978 Process with pid 80426 is not found 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 80426 is not found' 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:56.978 19:30:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:56.978 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:27:57.236 00:27:57.236 real 0m37.165s 00:27:57.236 user 1m10.084s 00:27:57.236 sys 0m11.065s 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.236 ************************************ 00:27:57.236 END TEST nvmf_digest 00:27:57.236 ************************************ 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:57.236 19:30:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.236 ************************************ 00:27:57.236 START TEST nvmf_host_multipath 00:27:57.236 ************************************ 00:27:57.237 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:57.497 * Looking for test storage... 00:27:57.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.497 --rc genhtml_branch_coverage=1 00:27:57.497 --rc genhtml_function_coverage=1 00:27:57.497 --rc genhtml_legend=1 00:27:57.497 --rc geninfo_all_blocks=1 00:27:57.497 --rc geninfo_unexecuted_blocks=1 00:27:57.497 00:27:57.497 ' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.497 --rc genhtml_branch_coverage=1 00:27:57.497 --rc genhtml_function_coverage=1 00:27:57.497 --rc genhtml_legend=1 00:27:57.497 --rc geninfo_all_blocks=1 00:27:57.497 --rc geninfo_unexecuted_blocks=1 00:27:57.497 00:27:57.497 ' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.497 --rc genhtml_branch_coverage=1 00:27:57.497 --rc genhtml_function_coverage=1 00:27:57.497 --rc genhtml_legend=1 00:27:57.497 --rc geninfo_all_blocks=1 00:27:57.497 --rc geninfo_unexecuted_blocks=1 00:27:57.497 00:27:57.497 ' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.497 --rc genhtml_branch_coverage=1 00:27:57.497 --rc genhtml_function_coverage=1 00:27:57.497 --rc genhtml_legend=1 00:27:57.497 --rc geninfo_all_blocks=1 00:27:57.497 --rc geninfo_unexecuted_blocks=1 00:27:57.497 00:27:57.497 ' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.497 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.498 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:57.498 Cannot find device "nvmf_init_br" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:57.498 Cannot find device "nvmf_init_br2" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:57.498 Cannot find device "nvmf_tgt_br" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:57.498 Cannot find device "nvmf_tgt_br2" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:57.498 Cannot find device "nvmf_init_br" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:57.498 Cannot find device "nvmf_init_br2" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:57.498 Cannot find device "nvmf_tgt_br" 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:27:57.498 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:57.757 Cannot find device "nvmf_tgt_br2" 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:57.757 Cannot find device "nvmf_br" 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:57.757 Cannot find device "nvmf_init_if" 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:57.757 Cannot find device "nvmf_init_if2" 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:27:57.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:57.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:57.757 19:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:57.757 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:57.757 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:58.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:58.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:27:58.052 00:27:58.052 --- 10.0.0.3 ping statistics --- 00:27:58.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.052 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:58.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:58.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:27:58.052 00:27:58.052 --- 10.0.0.4 ping statistics --- 00:27:58.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.052 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:58.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:58.052 00:27:58.052 --- 10.0.0.1 ping statistics --- 00:27:58.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.052 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:58.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:58.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:27:58.052 00:27:58.052 --- 10.0.0.2 ping statistics --- 00:27:58.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.052 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=80913 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 80913 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80913 ']' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.052 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:58.052 [2024-10-17 19:30:07.131385] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
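# --- Annotation (hedged sketch, not captured output): the nvmf_veth_init step traced above
# builds the two-path test topology that the rest of this run exercises. Condensed from the
# trace, the equivalent commands are roughly the following (names and 10.0.0.0/24 addressing
# taken from the log itself):
#
#   ip netns add nvmf_tgt_ns_spdk
#   ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1, 10.0.0.1
#   ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2, 10.0.0.2
#   ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1, 10.0.0.3 (in netns)
#   ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2, 10.0.0.4 (in netns)
#   ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
#   ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
#   ip link add nvmf_br type bridge && ip link set nvmf_br up
#   for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" master nvmf_br; done
#   iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on each initiator leg
#   iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
#
# The target application is then launched inside the namespace
# ("ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3"), so the
# host-side initiator only reaches 10.0.0.3/10.0.0.4 across the bridge, which is what lets the
# later set_ANA_state/confirm_io_on_port steps steer I/O between the 4420 and 4421 listeners.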
00:27:58.052 [2024-10-17 19:30:07.131519] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.052 [2024-10-17 19:30:07.273242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:58.345 [2024-10-17 19:30:07.335413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.345 [2024-10-17 19:30:07.335496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.345 [2024-10-17 19:30:07.335508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.345 [2024-10-17 19:30:07.335517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.345 [2024-10-17 19:30:07.335524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.345 [2024-10-17 19:30:07.336706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.345 [2024-10-17 19:30:07.336719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.345 [2024-10-17 19:30:07.393990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80913 00:27:58.345 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:58.603 [2024-10-17 19:30:07.743577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.603 19:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:58.861 Malloc0 00:27:58.861 19:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:59.427 19:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.427 19:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:59.685 [2024-10-17 19:30:08.931547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:59.943 19:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:59.943 [2024-10-17 19:30:09.195755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:00.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80964 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80964 /var/tmp/bdevperf.sock 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80964 ']' 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.201 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:00.460 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.460 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:28:00.460 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:00.718 19:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:01.283 Nvme0n1 00:28:01.283 19:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:01.564 Nvme0n1 00:28:01.564 19:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:28:01.564 19:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:02.497 19:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:28:02.497 19:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:02.754 19:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:03.012 19:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:28:03.012 19:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80999 00:28:03.012 19:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:03.012 19:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:09.575 Attaching 4 probes... 00:28:09.575 @path[10.0.0.3, 4421]: 15334 00:28:09.575 @path[10.0.0.3, 4421]: 16008 00:28:09.575 @path[10.0.0.3, 4421]: 16746 00:28:09.575 @path[10.0.0.3, 4421]: 16248 00:28:09.575 @path[10.0.0.3, 4421]: 17220 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80999 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:28:09.575 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:09.832 19:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:10.089 19:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:28:10.089 19:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81118 00:28:10.089 19:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:10.089 19:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:16.724 Attaching 4 probes... 00:28:16.724 @path[10.0.0.3, 4420]: 16847 00:28:16.724 @path[10.0.0.3, 4420]: 17280 00:28:16.724 @path[10.0.0.3, 4420]: 17332 00:28:16.724 @path[10.0.0.3, 4420]: 17347 00:28:16.724 @path[10.0.0.3, 4420]: 17157 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81118 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:16.724 19:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:16.982 19:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:16.982 19:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81232 00:28:16.982 19:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:16.982 19:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:23.538 Attaching 4 probes... 00:28:23.538 @path[10.0.0.3, 4421]: 13910 00:28:23.538 @path[10.0.0.3, 4421]: 16931 00:28:23.538 @path[10.0.0.3, 4421]: 16923 00:28:23.538 @path[10.0.0.3, 4421]: 16958 00:28:23.538 @path[10.0.0.3, 4421]: 16929 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81232 00:28:23.538 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:23.539 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:23.539 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:23.539 19:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:24.105 19:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:24.105 19:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81349 00:28:24.105 19:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:24.105 19:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:30.659 Attaching 4 probes... 
00:28:30.659 00:28:30.659 00:28:30.659 00:28:30.659 00:28:30.659 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81349 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:30.659 19:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:30.918 19:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:30.918 19:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81467 00:28:30.918 19:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:30.918 19:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:37.504 Attaching 4 probes... 
00:28:37.504 @path[10.0.0.3, 4421]: 16879 00:28:37.504 @path[10.0.0.3, 4421]: 16997 00:28:37.504 @path[10.0.0.3, 4421]: 17177 00:28:37.504 @path[10.0.0.3, 4421]: 16988 00:28:37.504 @path[10.0.0.3, 4421]: 17072 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81467 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:37.504 19:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:38.438 19:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:38.438 19:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81586 00:28:38.438 19:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:38.438 19:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:45.009 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:45.009 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:45.009 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:45.010 Attaching 4 probes... 
00:28:45.010 @path[10.0.0.3, 4420]: 16775 00:28:45.010 @path[10.0.0.3, 4420]: 17093 00:28:45.010 @path[10.0.0.3, 4420]: 16851 00:28:45.010 @path[10.0.0.3, 4420]: 16570 00:28:45.010 @path[10.0.0.3, 4420]: 16837 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81586 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:45.010 19:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:45.010 [2024-10-17 19:30:54.191066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:45.010 19:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:45.268 19:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:51.824 19:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:51.824 19:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81761 00:28:51.824 19:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:51.824 19:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80913 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:58.385 Attaching 4 probes... 
00:28:58.385 @path[10.0.0.3, 4421]: 16365 00:28:58.385 @path[10.0.0.3, 4421]: 16666 00:28:58.385 @path[10.0.0.3, 4421]: 16096 00:28:58.385 @path[10.0.0.3, 4421]: 16326 00:28:58.385 @path[10.0.0.3, 4421]: 14508 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81761 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80964 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80964 ']' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80964 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80964 00:28:58.385 killing process with pid 80964 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80964' 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80964 00:28:58.385 19:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80964 00:28:58.385 { 00:28:58.385 "results": [ 00:28:58.385 { 00:28:58.385 "job": "Nvme0n1", 00:28:58.385 "core_mask": "0x4", 00:28:58.385 "workload": "verify", 00:28:58.385 "status": "terminated", 00:28:58.385 "verify_range": { 00:28:58.385 "start": 0, 00:28:58.385 "length": 16384 00:28:58.385 }, 00:28:58.385 "queue_depth": 128, 00:28:58.385 "io_size": 4096, 00:28:58.385 "runtime": 56.031544, 00:28:58.385 "iops": 7155.183872855619, 00:28:58.385 "mibps": 27.94993700334226, 00:28:58.385 "io_failed": 0, 00:28:58.385 "io_timeout": 0, 00:28:58.385 "avg_latency_us": 17857.867843656208, 00:28:58.385 "min_latency_us": 707.4909090909091, 00:28:58.385 "max_latency_us": 7046430.72 00:28:58.385 } 00:28:58.385 ], 00:28:58.385 "core_count": 1 00:28:58.385 } 00:28:58.385 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80964 00:28:58.385 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:58.385 [2024-10-17 19:30:09.279869] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 
24.03.0 initialization... 00:28:58.385 [2024-10-17 19:30:09.280012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80964 ] 00:28:58.385 [2024-10-17 19:30:09.414124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.385 [2024-10-17 19:30:09.486465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.385 [2024-10-17 19:30:09.546272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:58.385 Running I/O for 90 seconds... 00:28:58.385 7367.00 IOPS, 28.78 MiB/s [2024-10-17T19:31:07.643Z] 7726.50 IOPS, 30.18 MiB/s [2024-10-17T19:31:07.643Z] 7764.00 IOPS, 30.33 MiB/s [2024-10-17T19:31:07.643Z] 7826.50 IOPS, 30.57 MiB/s [2024-10-17T19:31:07.643Z] 7932.00 IOPS, 30.98 MiB/s [2024-10-17T19:31:07.643Z] 7994.17 IOPS, 31.23 MiB/s [2024-10-17T19:31:07.643Z] 8059.00 IOPS, 31.48 MiB/s [2024-10-17T19:31:07.643Z] 8130.50 IOPS, 31.76 MiB/s [2024-10-17T19:31:07.643Z] [2024-10-17 19:30:19.184540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.184941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.184963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.385 [2024-10-17 19:30:19.185353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:58.385 [2024-10-17 19:30:19.185397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.385 [2024-10-17 19:30:19.185514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:58.385 [2024-10-17 19:30:19.185536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.185552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.185599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.185640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.185678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.185977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.185993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 
dnr:0 00:28:58.386 [2024-10-17 19:30:19.186643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.186970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.186993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.386 [2024-10-17 19:30:19.187010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.386 [2024-10-17 19:30:19.187258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.386 [2024-10-17 19:30:19.187275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.187314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.187354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.387 [2024-10-17 19:30:19.187828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.187966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.187982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.387 [2024-10-17 19:30:19.188329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.387 [2024-10-17 19:30:19.188789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:58.387 [2024-10-17 19:30:19.188811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.188851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.188867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.188890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.188906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.188928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.188944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.188966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.188983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:28:58.388 [2024-10-17 19:30:19.189042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.189262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.189279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.388 [2024-10-17 19:30:19.190755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.190811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.190850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.190890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.190930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.190970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.190993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.388 [2024-10-17 19:30:19.191556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.388 [2024-10-17 19:30:19.191573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:19.191600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:19.191618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:58.389 8176.56 IOPS, 31.94 MiB/s [2024-10-17T19:31:07.647Z] 8210.50 IOPS, 32.07 MiB/s [2024-10-17T19:31:07.647Z] 8250.64 IOPS, 32.23 MiB/s [2024-10-17T19:31:07.647Z] 8286.33 IOPS, 32.37 MiB/s [2024-10-17T19:31:07.647Z] 8315.92 IOPS, 32.48 MiB/s [2024-10-17T19:31:07.647Z] 8335.79 IOPS, 32.56 MiB/s [2024-10-17T19:31:07.647Z] [2024-10-17 19:30:25.800811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.800913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.800974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.800997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801213] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.389 [2024-10-17 19:30:25.801792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:58.389 [2024-10-17 19:30:25.801891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.389 [2024-10-17 19:30:25.801907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.801929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.801945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.801971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.801988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:28:58.390 [2024-10-17 19:30:25.802865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.390 [2024-10-17 19:30:25.802881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.390 [2024-10-17 19:30:25.802956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:58.390 [2024-10-17 19:30:25.802993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.391 [2024-10-17 19:30:25.803623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.803987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.804003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.804039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.804076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:58.391 [2024-10-17 19:30:25.804092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.804112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.391 [2024-10-17 19:30:25.804127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:58.391 [2024-10-17 19:30:25.804148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.804198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.804238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.804274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.804310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.392 [2024-10-17 19:30:25.804928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.804967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.804990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:58.392 [2024-10-17 19:30:25.805309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:28:58.392 [2024-10-17 19:30:25.805349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.392 [2024-10-17 19:30:25.805365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.805387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.805403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.805426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.805441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.805463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.805479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.805516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.805539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.805554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:58.393 8353.40 IOPS, 32.63 MiB/s [2024-10-17T19:31:07.651Z] [2024-10-17 19:30:25.807902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.807927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.807980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:25.808395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.393 [2024-10-17 19:30:25.808575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:25.808756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:25.808772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.393 7831.31 IOPS, 30.59 MiB/s [2024-10-17T19:31:07.651Z] 7869.53 IOPS, 30.74 MiB/s [2024-10-17T19:31:07.651Z] 7903.67 IOPS, 30.87 MiB/s [2024-10-17T19:31:07.651Z] 7933.26 IOPS, 30.99 MiB/s [2024-10-17T19:31:07.651Z] 7961.10 IOPS, 31.10 MiB/s [2024-10-17T19:31:07.651Z] 7984.29 IOPS, 31.19 MiB/s [2024-10-17T19:31:07.651Z] 8007.55 IOPS, 31.28 MiB/s [2024-10-17T19:31:07.651Z] [2024-10-17 19:30:33.070119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:33.070222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:33.070289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:33.070312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:33.070337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:33.070354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:33.070375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.393 [2024-10-17 19:30:33.070391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.393 [2024-10-17 19:30:33.070414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:43 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.393 [2024-10-17 19:30:33.070429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.070778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.070815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070836] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.070851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.070873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.070889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.394 [2024-10-17 19:30:33.071923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.071961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.071984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.072000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.072039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.072059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:58.394 [2024-10-17 19:30:33.072082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.394 [2024-10-17 19:30:33.072099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 
19:30:33.072201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117464 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.072692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.072972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.072995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.395 [2024-10-17 19:30:33.073348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.073407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 
sqhd:0016 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.073458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.073496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.395 [2024-10-17 19:30:33.073536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:58.395 [2024-10-17 19:30:33.073559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.073971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.073995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 
19:30:33.074236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.074636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.074982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.396 [2024-10-17 19:30:33.074998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.075033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.075050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.075073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.075089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.075112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.075141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.075167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.396 [2024-10-17 19:30:33.075184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.396 [2024-10-17 19:30:33.075207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 
m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:33.075644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:33.075660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:58.397 7757.48 IOPS, 30.30 MiB/s [2024-10-17T19:31:07.655Z] 7434.25 IOPS, 29.04 MiB/s [2024-10-17T19:31:07.655Z] 7136.88 IOPS, 27.88 MiB/s [2024-10-17T19:31:07.655Z] 6862.38 IOPS, 26.81 MiB/s [2024-10-17T19:31:07.655Z] 6608.22 IOPS, 25.81 MiB/s [2024-10-17T19:31:07.655Z] 6372.21 IOPS, 24.89 MiB/s [2024-10-17T19:31:07.655Z] 6152.48 IOPS, 24.03 MiB/s [2024-10-17T19:31:07.655Z] 6153.47 IOPS, 24.04 MiB/s [2024-10-17T19:31:07.655Z] 6231.61 IOPS, 24.34 MiB/s [2024-10-17T19:31:07.655Z] 6303.38 IOPS, 24.62 MiB/s [2024-10-17T19:31:07.655Z] 6372.24 IOPS, 24.89 MiB/s [2024-10-17T19:31:07.655Z] 6434.47 IOPS, 25.13 MiB/s [2024-10-17T19:31:07.655Z] 6495.66 IOPS, 25.37 MiB/s [2024-10-17T19:31:07.655Z] [2024-10-17 19:30:46.564031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.564815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.564852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.564895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.564933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.564971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.564993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.565009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.565047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.565084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-10-17 19:30:46.565122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.565214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.565245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.397 [2024-10-17 19:30:46.565271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.397 [2024-10-17 19:30:46.565286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.565691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.565985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.565999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 
[2024-10-17 19:30:46.566419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.398 [2024-10-17 19:30:46.566463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.566500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.566531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.566561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.566591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.398 [2024-10-17 19:30:46.566606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.398 [2024-10-17 19:30:46.566620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.566650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.566679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.566709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.566971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.566985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.399 [2024-10-17 19:30:46.567454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 [2024-10-17 19:30:46.567652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.399 [2024-10-17 19:30:46.567678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.399 
[2024-10-17 19:30:46.567693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.400 [2024-10-17 19:30:46.567929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.567945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3320 is same with the state(6) to be set 00:28:58.400 [2024-10-17 19:30:46.567963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.567974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.567985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35672 len:8 PRP1 0x0 PRP2 0x0 
00:28:58.400 [2024-10-17 19:30:46.568005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36064 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36072 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36080 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36088 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36096 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36104 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36112 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36120 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36128 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36136 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36144 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36152 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36160 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36168 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36176 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:58.400 [2024-10-17 19:30:46.568789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:58.400 [2024-10-17 19:30:46.568799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36184 len:8 PRP1 0x0 PRP2 0x0 00:28:58.400 [2024-10-17 19:30:46.568819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.400 [2024-10-17 19:30:46.568885] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f3320 was disconnected and freed. reset controller. 
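The burst of notices above is the path outage itself: every in-flight READ/WRITE on qid:1 first completes with the ANA path status ASYMMETRIC ACCESS INACCESSIBLE (03/02), then the remaining queued commands are aborted with ABORTED - SQ DELETION (00/08) and completed manually while bdev_nvme drains the queue pair; the sequence ends with qpair 0x16f3320 being disconnected and freed and a controller reset being scheduled. The rolling IOPS samples interleaved with these records show the matching throughput dip (from roughly 7757 down to about 6152 IOPS) before failover completes. A quick way to summarize such a burst from a saved copy of this console output is to tally the completion statuses; the lines below are an illustrative sketch only, and the file name build.log is a placeholder for wherever this output was captured.

  # Sketch (not part of the captured run): count completions per NVMe status string.
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]*([0-9a-f]*/[0-9a-f]*)' build.log \
    | sed 's/.*NOTICE\*: //' \
    | sort | uniq -c | sort -rn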
00:28:58.400 [2024-10-17 19:30:46.570157] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.400 [2024-10-17 19:30:46.570266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.401 [2024-10-17 19:30:46.570290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.401 [2024-10-17 19:30:46.570343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6d20 (9): Bad file descriptor 00:28:58.401 [2024-10-17 19:30:46.570803] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-10-17 19:30:46.570835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6d20 with addr=10.0.0.3, port=4421 00:28:58.401 [2024-10-17 19:30:46.570853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6d20 is same with the state(6) to be set 00:28:58.401 [2024-10-17 19:30:46.570886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6d20 (9): Bad file descriptor 00:28:58.401 [2024-10-17 19:30:46.570918] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.401 [2024-10-17 19:30:46.570934] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.401 [2024-10-17 19:30:46.570948] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.401 [2024-10-17 19:30:46.570982] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.401 [2024-10-17 19:30:46.570999] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.401 6549.25 IOPS, 25.58 MiB/s [2024-10-17T19:31:07.659Z] 6576.78 IOPS, 25.69 MiB/s [2024-10-17T19:31:07.659Z] 6625.18 IOPS, 25.88 MiB/s [2024-10-17T19:31:07.659Z] 6675.62 IOPS, 26.08 MiB/s [2024-10-17T19:31:07.659Z] 6719.93 IOPS, 26.25 MiB/s [2024-10-17T19:31:07.659Z] 6759.73 IOPS, 26.41 MiB/s [2024-10-17T19:31:07.659Z] 6797.26 IOPS, 26.55 MiB/s [2024-10-17T19:31:07.659Z] 6835.65 IOPS, 26.70 MiB/s [2024-10-17T19:31:07.659Z] 6868.66 IOPS, 26.83 MiB/s [2024-10-17T19:31:07.659Z] 6901.09 IOPS, 26.96 MiB/s [2024-10-17T19:31:07.659Z] [2024-10-17 19:30:56.638925] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
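This is the failover sequence the multipath suite is checking: the portal the host was using disappears, connect() to the alternate portal (10.0.0.3:4421) is refused at first (errno 111), and bdev_nvme keeps retrying under its reconnect policy until the reset succeeds roughly ten seconds later, at which point the interleaved bdevperf readings show throughput recovering. A minimal sketch of how a path can be toggled from the target side to provoke exactly this behaviour, using only RPCs that appear elsewhere in this trace (the exact ordering and timing used by multipath.sh is an assumption here):

  # drop the portal the host is currently connected to;
  # queued I/O completes with ABORTED - SQ DELETION as shown above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # later, expose the alternate portal; the host's reset loop picks it up and I/O resumes
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421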
00:28:58.401 6933.83 IOPS, 27.09 MiB/s [2024-10-17T19:31:07.659Z] 6964.17 IOPS, 27.20 MiB/s [2024-10-17T19:31:07.659Z] 6994.92 IOPS, 27.32 MiB/s [2024-10-17T19:31:07.659Z] 7022.94 IOPS, 27.43 MiB/s [2024-10-17T19:31:07.659Z] 7050.64 IOPS, 27.54 MiB/s [2024-10-17T19:31:07.659Z] 7074.75 IOPS, 27.64 MiB/s [2024-10-17T19:31:07.659Z] 7098.08 IOPS, 27.73 MiB/s [2024-10-17T19:31:07.659Z] 7116.68 IOPS, 27.80 MiB/s [2024-10-17T19:31:07.659Z] 7135.93 IOPS, 27.87 MiB/s [2024-10-17T19:31:07.659Z] 7138.11 IOPS, 27.88 MiB/s [2024-10-17T19:31:07.659Z] 7156.00 IOPS, 27.95 MiB/s [2024-10-17T19:31:07.659Z] Received shutdown signal, test time was about 56.032422 seconds 00:28:58.401 00:28:58.401 Latency(us) 00:28:58.401 [2024-10-17T19:31:07.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.401 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:58.401 Verification LBA range: start 0x0 length 0x4000 00:28:58.401 Nvme0n1 : 56.03 7155.18 27.95 0.00 0.00 17857.87 707.49 7046430.72 00:28:58.401 [2024-10-17T19:31:07.659Z] =================================================================================================================== 00:28:58.401 [2024-10-17T19:31:07.659Z] Total : 7155.18 27.95 0.00 0.00 17857.87 707.49 7046430.72 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.401 rmmod nvme_tcp 00:28:58.401 rmmod nvme_fabrics 00:28:58.401 rmmod nvme_keyring 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 80913 ']' 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 80913 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80913 ']' 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80913 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.401 19:31:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80913 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:58.401 killing process with pid 80913 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80913' 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80913 00:28:58.401 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80913 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:58.659 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.917 19:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:28:58.917 00:28:58.917 real 1m1.572s 00:28:58.917 user 2m50.205s 00:28:58.917 sys 0m19.282s 00:28:58.917 ************************************ 00:28:58.917 END TEST nvmf_host_multipath 00:28:58.917 ************************************ 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.917 ************************************ 00:28:58.917 START TEST nvmf_timeout 00:28:58.917 ************************************ 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:58.917 * Looking for test storage... 00:28:58.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:28:58.917 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.176 --rc genhtml_branch_coverage=1 00:28:59.176 --rc genhtml_function_coverage=1 00:28:59.176 --rc genhtml_legend=1 00:28:59.176 --rc geninfo_all_blocks=1 00:28:59.176 --rc geninfo_unexecuted_blocks=1 00:28:59.176 00:28:59.176 ' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.176 --rc genhtml_branch_coverage=1 00:28:59.176 --rc genhtml_function_coverage=1 00:28:59.176 --rc genhtml_legend=1 00:28:59.176 --rc geninfo_all_blocks=1 00:28:59.176 --rc geninfo_unexecuted_blocks=1 00:28:59.176 00:28:59.176 ' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.176 --rc genhtml_branch_coverage=1 00:28:59.176 --rc genhtml_function_coverage=1 00:28:59.176 --rc genhtml_legend=1 00:28:59.176 --rc geninfo_all_blocks=1 00:28:59.176 --rc geninfo_unexecuted_blocks=1 00:28:59.176 00:28:59.176 ' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:59.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.176 --rc genhtml_branch_coverage=1 00:28:59.176 --rc genhtml_function_coverage=1 00:28:59.176 --rc genhtml_legend=1 00:28:59.176 --rc geninfo_all_blocks=1 00:28:59.176 --rc geninfo_unexecuted_blocks=1 00:28:59.176 00:28:59.176 ' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.176 19:31:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.176 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:59.176 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 
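Because the net type here is virt, prepare_net_devs falls through to nvmf_veth_init (both visible in the trace below) and builds an all-virtual topology; the "Cannot find device" and "Cannot open network namespace" messages at the start of that trace come from the preliminary cleanup of any leftover interfaces and are expected on a clean host. Condensed, and keeping only commands that appear in the trace that follows, the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs (10.0.0.1, 10.0.0.2)
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs (10.0.0.3, 10.0.0.4)
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                 # bridge ties the host-side peers together
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in

plus the corresponding link-up commands; the four pings at the end of the trace confirm connectivity between the host-side and namespace-side addresses in both directions.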
00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:59.177 Cannot find device "nvmf_init_br" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:59.177 Cannot find device "nvmf_init_br2" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:59.177 Cannot find device "nvmf_tgt_br" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:59.177 Cannot find device "nvmf_tgt_br2" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:59.177 Cannot find device "nvmf_init_br" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:59.177 Cannot find device "nvmf_init_br2" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:59.177 Cannot find device "nvmf_tgt_br" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:59.177 Cannot find device "nvmf_tgt_br2" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:59.177 Cannot find device "nvmf_br" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:59.177 Cannot find device "nvmf_init_if" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:59.177 Cannot find device "nvmf_init_if2" 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:59.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:59.177 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:59.177 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:28:59.435 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:59.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:59.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:28:59.435 00:28:59.435 --- 10.0.0.3 ping statistics --- 00:28:59.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.435 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:59.436 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:59.436 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:28:59.436 00:28:59.436 --- 10.0.0.4 ping statistics --- 00:28:59.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.436 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:59.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:28:59.436 00:28:59.436 --- 10.0.0.1 ping statistics --- 00:28:59.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.436 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:59.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:28:59.436 00:28:59.436 --- 10.0.0.2 ping statistics --- 00:28:59.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.436 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=82133 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 82133 00:28:59.436 19:31:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82133 ']' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:59.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:59.436 19:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:59.698 [2024-10-17 19:31:08.744554] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:28:59.698 [2024-10-17 19:31:08.744693] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.698 [2024-10-17 19:31:08.889281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:59.956 [2024-10-17 19:31:08.963759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.956 [2024-10-17 19:31:08.963838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.956 [2024-10-17 19:31:08.963854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.956 [2024-10-17 19:31:08.963865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.956 [2024-10-17 19:31:08.963874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
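With the namespace in place, the harness starts the target inside it: nvmf_tgt -i 0 -e 0xFFFF -m 0x3 sets shared-memory instance id 0, enables all tracepoint groups, and runs two reactors on the two cores the EAL init above reports; the reactor-start and "Default socket implementation override: uring" notices just below come from this same process. A minimal sketch of the launch-and-wait step (the polling loop is only an approximation of the harness helper waitforlisten, not its actual implementation):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # block until the target's RPC socket (/var/tmp/spdk.sock by default) answers
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done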
00:28:59.956 [2024-10-17 19:31:08.965213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.956 [2024-10-17 19:31:08.965227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.956 [2024-10-17 19:31:09.025017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.956 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.215 [2024-10-17 19:31:09.458630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.473 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:00.731 Malloc0 00:29:00.731 19:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.988 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.554 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:01.554 [2024-10-17 19:31:10.804109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82179 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82179 /var/tmp/bdevperf.sock 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82179 ']' 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
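Target-side provisioning for the timeout test is the usual five-step RPC sequence, all of it visible in the trace above: create the TCP transport, back a namespace with a 64 MiB, 512 B-block malloc bdev, create the subsystem, attach the namespace, and open one listener. Pulled out of the trace (these RPCs go to the target's default /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The initiator is a second SPDK process: bdevperf is started with -z (wait for RPC) on its own socket, /var/tmp/bdevperf.sock, with queue depth 128, 4 KiB I/O, a verify workload and a 10 second run time, which is the "Waiting for process..." line this block ends on.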
00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.812 19:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.812 [2024-10-17 19:31:10.893290] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:29:01.812 [2024-10-17 19:31:10.893426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82179 ] 00:29:01.812 [2024-10-17 19:31:11.033631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.070 [2024-10-17 19:31:11.106167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.070 [2024-10-17 19:31:11.166877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:02.070 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.070 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:29:02.070 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:02.340 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:02.919 NVMe0n1 00:29:02.919 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82191 00:29:02.919 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.919 19:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:29:02.919 Running I/O for 10 seconds... 
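Everything after this point is driven through bdevperf's own RPC socket: the host-side NVMe options are set, the controller is attached with an explicit reconnect policy, and perform_tests kicks off the verify workload whose progress readings follow. Taken from the trace above (bdevperf.sock is the -r socket the process was started with):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The --ctrlr-loss-timeout-sec and --reconnect-delay-sec values bound how long and how often the host retries a lost controller; they are the knobs this test exercises by yanking the listener out from under the running workload, which is what the nvmf_subsystem_remove_listener call and the abort flood below show.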
00:29:03.853 19:31:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:04.113 7055.00 IOPS, 27.56 MiB/s [2024-10-17T19:31:13.371Z] [2024-10-17 19:31:13.300078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.113 [2024-10-17 19:31:13.300349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69248 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:04.113 [2024-10-17 19:31:13.300607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.113 [2024-10-17 19:31:13.300747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.113 [2024-10-17 19:31:13.300759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.300768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.300788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.300813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.300834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.300855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.300877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.300897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.300935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.300964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.300984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.300996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:04.114 [2024-10-17 19:31:13.301506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.114 [2024-10-17 19:31:13.301581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.114 [2024-10-17 19:31:13.301684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.114 [2024-10-17 19:31:13.301696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.301705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.301726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.301746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.301937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.301961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.301982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.301994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:04.115 [2024-10-17 19:31:13.302417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.115 [2024-10-17 19:31:13.302477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.115 [2024-10-17 19:31:13.302610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.115 [2024-10-17 19:31:13.302631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe835e0 is same with the state(6) to be set 00:29:04.116 [2024-10-17 19:31:13.302655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69680 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70136 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70144 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70152 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70160 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70168 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70176 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70184 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.302966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.302974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70192 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.302984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.302993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70200 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70208 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70216 len:8 PRP1 
0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70224 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70232 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70240 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70248 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.116 [2024-10-17 19:31:13.303270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.116 [2024-10-17 19:31:13.303278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70256 len:8 PRP1 0x0 PRP2 0x0 00:29:04.116 [2024-10-17 19:31:13.303287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.116 [2024-10-17 19:31:13.303359] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe835e0 was disconnected and freed. reset controller. 
00:29:04.116 [2024-10-17 19:31:13.303651] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.116 [2024-10-17 19:31:13.303772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe152e0 (9): Bad file descriptor 00:29:04.116 [2024-10-17 19:31:13.303897] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.116 [2024-10-17 19:31:13.303925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe152e0 with addr=10.0.0.3, port=4420 00:29:04.116 [2024-10-17 19:31:13.303944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe152e0 is same with the state(6) to be set 00:29:04.116 [2024-10-17 19:31:13.303972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe152e0 (9): Bad file descriptor 00:29:04.116 [2024-10-17 19:31:13.303995] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.116 [2024-10-17 19:31:13.304009] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.116 [2024-10-17 19:31:13.304027] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.116 [2024-10-17 19:31:13.304057] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.116 [2024-10-17 19:31:13.304071] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.116 19:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:29:06.019 4327.50 IOPS, 16.90 MiB/s [2024-10-17T19:31:15.535Z] 2885.00 IOPS, 11.27 MiB/s [2024-10-17T19:31:15.535Z] [2024-10-17 19:31:15.304384] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.277 [2024-10-17 19:31:15.304478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe152e0 with addr=10.0.0.3, port=4420 00:29:06.277 [2024-10-17 19:31:15.304501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe152e0 is same with the state(6) to be set 00:29:06.277 [2024-10-17 19:31:15.304533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe152e0 (9): Bad file descriptor 00:29:06.277 [2024-10-17 19:31:15.304555] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.277 [2024-10-17 19:31:15.304567] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.277 [2024-10-17 19:31:15.304578] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.277 [2024-10-17 19:31:15.304613] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.277 [2024-10-17 19:31:15.304625] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.277 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:29:06.277 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:06.277 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:06.535 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:29:06.535 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:29:06.535 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:06.535 19:31:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:06.792 19:31:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:29:06.792 19:31:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:29:07.997 2163.75 IOPS, 8.45 MiB/s [2024-10-17T19:31:17.512Z] 1731.00 IOPS, 6.76 MiB/s [2024-10-17T19:31:17.512Z] [2024-10-17 19:31:17.304956] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.254 [2024-10-17 19:31:17.305050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe152e0 with addr=10.0.0.3, port=4420 00:29:08.254 [2024-10-17 19:31:17.305069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe152e0 is same with the state(6) to be set 00:29:08.254 [2024-10-17 19:31:17.305102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe152e0 (9): Bad file descriptor 00:29:08.254 [2024-10-17 19:31:17.305125] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.254 [2024-10-17 19:31:17.305165] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.254 [2024-10-17 19:31:17.305178] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.254 [2024-10-17 19:31:17.305213] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.254 [2024-10-17 19:31:17.305227] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.144 1442.50 IOPS, 5.63 MiB/s [2024-10-17T19:31:19.402Z] 1236.43 IOPS, 4.83 MiB/s [2024-10-17T19:31:19.402Z] [2024-10-17 19:31:19.305344] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.144 [2024-10-17 19:31:19.305438] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.144 [2024-10-17 19:31:19.305452] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.144 [2024-10-17 19:31:19.305465] nvme_ctrlr.c:1140:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:10.144 [2024-10-17 19:31:19.305502] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.075 1081.88 IOPS, 4.23 MiB/s 00:29:11.075 Latency(us) 00:29:11.075 [2024-10-17T19:31:20.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.075 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:11.075 Verification LBA range: start 0x0 length 0x4000 00:29:11.075 NVMe0n1 : 8.18 1057.74 4.13 15.64 0.00 119058.56 3753.43 7015926.69 00:29:11.075 [2024-10-17T19:31:20.333Z] =================================================================================================================== 00:29:11.075 [2024-10-17T19:31:20.333Z] Total : 1057.74 4.13 15.64 0.00 119058.56 3753.43 7015926.69 00:29:11.075 { 00:29:11.075 "results": [ 00:29:11.075 { 00:29:11.075 "job": "NVMe0n1", 00:29:11.075 "core_mask": "0x4", 00:29:11.075 "workload": "verify", 00:29:11.075 "status": "finished", 00:29:11.075 "verify_range": { 00:29:11.075 "start": 0, 00:29:11.075 "length": 16384 00:29:11.075 }, 00:29:11.075 "queue_depth": 128, 00:29:11.075 "io_size": 4096, 00:29:11.075 "runtime": 8.182536, 00:29:11.075 "iops": 1057.7405342304635, 00:29:11.075 "mibps": 4.131798961837748, 00:29:11.075 "io_failed": 128, 00:29:11.075 "io_timeout": 0, 00:29:11.075 "avg_latency_us": 119058.5638090112, 00:29:11.075 "min_latency_us": 3753.4254545454546, 00:29:11.075 "max_latency_us": 7015926.69090909 00:29:11.075 } 00:29:11.075 ], 00:29:11.075 "core_count": 1 00:29:11.075 } 00:29:12.006 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:29:12.006 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:12.006 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:12.263 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:29:12.263 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:29:12.263 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:12.263 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82191 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82179 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82179 ']' 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82179 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82179 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:12.521 killing process with pid 82179 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82179' 00:29:12.521 Received shutdown signal, test time was about 9.577747 seconds 
00:29:12.521 00:29:12.521 Latency(us) 00:29:12.521 [2024-10-17T19:31:21.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.521 [2024-10-17T19:31:21.779Z] =================================================================================================================== 00:29:12.521 [2024-10-17T19:31:21.779Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82179 00:29:12.521 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82179 00:29:12.779 19:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:13.037 [2024-10-17 19:31:22.165993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82318 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82318 /var/tmp/bdevperf.sock 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82318 ']' 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:13.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.037 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:13.037 [2024-10-17 19:31:22.249756] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:29:13.037 [2024-10-17 19:31:22.249891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82318 ] 00:29:13.295 [2024-10-17 19:31:22.389971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.296 [2024-10-17 19:31:22.456397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.296 [2024-10-17 19:31:22.514063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:13.554 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:13.554 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:29:13.554 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:13.811 19:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:29:14.069 NVMe0n1 00:29:14.069 19:31:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82330 00:29:14.069 19:31:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.069 19:31:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:29:14.327 Running I/O for 10 seconds... 
00:29:15.260 19:31:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:15.523 6677.00 IOPS, 26.08 MiB/s [2024-10-17T19:31:24.781Z] [2024-10-17 19:31:24.559773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.559995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 
00:29:15.523 [2024-10-17 19:31:24.560010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.523 [2024-10-17 19:31:24.560268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560380] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the 
state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.560595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346e50 is same with the state(6) to be set 00:29:15.524 [2024-10-17 19:31:24.561192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.524 [2024-10-17 19:31:24.561239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.524 [2024-10-17 19:31:24.561266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.524 [2024-10-17 19:31:24.561277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.524 [2024-10-17 19:31:24.561290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.524 [2024-10-17 19:31:24.561299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.524 [2024-10-17 19:31:24.561311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.524 [2024-10-17 19:31:24.561320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.524 [2024-10-17 19:31:24.561332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.525 [2024-10-17 19:31:24.561828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.525 [2024-10-17 19:31:24.561837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.561982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.561993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 
[2024-10-17 19:31:24.562034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.526 [2024-10-17 19:31:24.562244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.526 [2024-10-17 19:31:24.562255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562482] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.527 [2024-10-17 19:31:24.562669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.527 [2024-10-17 19:31:24.562678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63448 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:15.528 [2024-10-17 19:31:24.562901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.562985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.562996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 
19:31:24.563108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.528 [2024-10-17 19:31:24.563179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.528 [2024-10-17 19:31:24.563201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.528 [2024-10-17 19:31:24.563212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.529 [2024-10-17 19:31:24.563508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.529 [2024-10-17 19:31:24.563529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b885e0 is same with the state(6) to be set 00:29:15.529 [2024-10-17 19:31:24.563552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:29:15.529 [2024-10-17 19:31:24.563560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:29:15.529 [2024-10-17 19:31:24.563578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.529 [2024-10-17 19:31:24.563596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63776 len:8 PRP1 0x0 PRP2 0x0 00:29:15.529 [2024-10-17 19:31:24.563613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.529 [2024-10-17 19:31:24.563635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63784 len:8 PRP1 0x0 PRP2 0x0 00:29:15.529 [2024-10-17 19:31:24.563653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.529 [2024-10-17 19:31:24.563669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63792 len:8 PRP1 0x0 PRP2 0x0 00:29:15.529 [2024-10-17 19:31:24.563686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.529 [2024-10-17 19:31:24.563703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63800 len:8 PRP1 0x0 PRP2 0x0 00:29:15.529 [2024-10-17 19:31:24.563719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.529 [2024-10-17 19:31:24.563728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.529 [2024-10-17 19:31:24.563735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.529 [2024-10-17 19:31:24.563742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63808 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563766] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63816 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63824 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63832 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63848 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63856 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.563968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.563977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.563984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.563992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63864 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.564000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.564009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.564017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.564024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63872 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.564033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.564041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.564049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.564056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.564065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.564080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.564087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.580986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.581049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.581091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.581102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63896 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.581116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.581161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.581172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63904 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.581184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.581209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 
[2024-10-17 19:31:24.581220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63912 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.581233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.530 [2024-10-17 19:31:24.581255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.530 [2024-10-17 19:31:24.581266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 PRP1 0x0 PRP2 0x0 00:29:15.530 [2024-10-17 19:31:24.581278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581370] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b885e0 was disconnected and freed. reset controller. 00:29:15.530 [2024-10-17 19:31:24.581557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.530 [2024-10-17 19:31:24.581589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.530 [2024-10-17 19:31:24.581608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.531 [2024-10-17 19:31:24.581621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.531 [2024-10-17 19:31:24.581635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.531 [2024-10-17 19:31:24.581647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.531 [2024-10-17 19:31:24.581660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.531 [2024-10-17 19:31:24.581672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.531 [2024-10-17 19:31:24.581684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:15.531 [2024-10-17 19:31:24.582001] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.531 [2024-10-17 19:31:24.582053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:15.531 [2024-10-17 19:31:24.582221] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.531 [2024-10-17 19:31:24.582262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:15.531 [2024-10-17 19:31:24.582277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:15.531 [2024-10-17 19:31:24.582302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:15.531 
[2024-10-17 19:31:24.582323] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.531 [2024-10-17 19:31:24.582335] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.531 [2024-10-17 19:31:24.582349] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.531 [2024-10-17 19:31:24.582378] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.531 [2024-10-17 19:31:24.582394] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.531 19:31:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:29:16.387 3931.50 IOPS, 15.36 MiB/s [2024-10-17T19:31:25.645Z] [2024-10-17 19:31:25.582602] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.387 [2024-10-17 19:31:25.582700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:16.387 [2024-10-17 19:31:25.582718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:16.387 [2024-10-17 19:31:25.582753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:16.387 [2024-10-17 19:31:25.582775] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.387 [2024-10-17 19:31:25.582787] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.387 [2024-10-17 19:31:25.582798] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.387 [2024-10-17 19:31:25.582833] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.387 [2024-10-17 19:31:25.582848] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.387 19:31:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:16.644 [2024-10-17 19:31:25.870308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:16.644 19:31:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82330 00:29:17.468 2621.00 IOPS, 10.24 MiB/s [2024-10-17T19:31:26.726Z] [2024-10-17 19:31:26.594422] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:19.338 1965.75 IOPS, 7.68 MiB/s [2024-10-17T19:31:29.528Z] 3025.20 IOPS, 11.82 MiB/s [2024-10-17T19:31:30.483Z] 4035.67 IOPS, 15.76 MiB/s [2024-10-17T19:31:31.416Z] 4723.14 IOPS, 18.45 MiB/s [2024-10-17T19:31:32.795Z] 5228.25 IOPS, 20.42 MiB/s [2024-10-17T19:31:33.729Z] 5634.44 IOPS, 22.01 MiB/s [2024-10-17T19:31:33.729Z] 5974.60 IOPS, 23.34 MiB/s 00:29:24.471 Latency(us) 00:29:24.471 [2024-10-17T19:31:33.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.471 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:24.471 Verification LBA range: start 0x0 length 0x4000 00:29:24.471 NVMe0n1 : 10.01 5979.86 23.36 0.00 0.00 21366.82 1377.75 3050402.91 00:29:24.471 [2024-10-17T19:31:33.729Z] =================================================================================================================== 00:29:24.471 [2024-10-17T19:31:33.729Z] Total : 5979.86 23.36 0.00 0.00 21366.82 1377.75 3050402.91 00:29:24.471 { 00:29:24.471 "results": [ 00:29:24.471 { 00:29:24.471 "job": "NVMe0n1", 00:29:24.471 "core_mask": "0x4", 00:29:24.471 "workload": "verify", 00:29:24.471 "status": "finished", 00:29:24.471 "verify_range": { 00:29:24.471 "start": 0, 00:29:24.471 "length": 16384 00:29:24.471 }, 00:29:24.471 "queue_depth": 128, 00:29:24.471 "io_size": 4096, 00:29:24.471 "runtime": 10.007932, 00:29:24.471 "iops": 5979.8567776040045, 00:29:24.471 "mibps": 23.358815537515643, 00:29:24.471 "io_failed": 0, 00:29:24.471 "io_timeout": 0, 00:29:24.471 "avg_latency_us": 21366.822645031338, 00:29:24.471 "min_latency_us": 1377.7454545454545, 00:29:24.471 "max_latency_us": 3050402.909090909 00:29:24.471 } 00:29:24.471 ], 00:29:24.471 "core_count": 1 00:29:24.471 } 00:29:24.471 19:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82439 00:29:24.471 19:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.471 19:31:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:29:24.471 Running I/O for 10 seconds... 
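The JSON block above is the machine-readable form of the bdevperf latency table that precedes it; results[0] carries the per-job iops, mibps, avg_latency_us, io_failed and io_timeout figures for NVMe0n1. A short sketch of pulling those fields out of a captured result, assuming the JSON has been saved to a file (the file name and the final check are illustrative; the field names are the ones shown above):

import json

# Load a bdevperf result captured from a run like the one above.
with open("bdevperf_result.json") as f:   # illustrative file name
    result = json.load(f)

job = result["results"][0]
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {job["mibps"]:.2f} MiB/s, '
      f'avg latency {job["avg_latency_us"]:.1f} us over {job["runtime"]:.2f} s')

# The run above finished with no failed or timed-out I/O; a harness reading
# its own captures could assert the same.
assert job["io_failed"] == 0 and job["io_timeout"] == 0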
00:29:25.407 19:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:25.668 6804.00 IOPS, 26.58 MiB/s [2024-10-17T19:31:34.926Z] [2024-10-17 19:31:34.678574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.668 [2024-10-17 19:31:34.678818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 
[... previous message from tcp.c:1773:nvmf_tcp_qpair_set_recv_state for tqpair=0x23443e0 repeated roughly 90 more times, 19:31:34.678826 through 19:31:34.679573 ...]
00:29:25.669 [2024-10-17 19:31:34.679582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.669 [2024-10-17 19:31:34.679590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.669 [2024-10-17 19:31:34.679598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23443e0 is same with the state(6) to be set 00:29:25.669 [2024-10-17 19:31:34.679666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.669 [2024-10-17 19:31:34.679700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.669 [2024-10-17 19:31:34.679725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.669 [2024-10-17 19:31:34.679736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.669 [2024-10-17 19:31:34.679748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.669 [2024-10-17 19:31:34.679758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.669 [2024-10-17 19:31:34.679770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.669 [2024-10-17 19:31:34.679779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.669 [2024-10-17 19:31:34.679791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.669 [2024-10-17 19:31:34.679800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.679987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.679999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 
19:31:34.680106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.670 [2024-10-17 19:31:34.680695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.670 [2024-10-17 19:31:34.680705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58024 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.680973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.680991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:25.671 [2024-10-17 19:31:34.681007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681563] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.671 [2024-10-17 19:31:34.681771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.671 [2024-10-17 19:31:34.681781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.681958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.681978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:25.672 [2024-10-17 19:31:34.682096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 
19:31:34.682351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.672 [2024-10-17 19:31:34.682361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.672 [2024-10-17 19:31:34.682626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.672 [2024-10-17 19:31:34.682635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.673 [2024-10-17 19:31:34.682647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.673 [2024-10-17 19:31:34.682656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.673 [2024-10-17 19:31:34.682667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.673 [2024-10-17 19:31:34.682676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.673 [2024-10-17 19:31:34.682699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.673 [2024-10-17 19:31:34.682709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.673 [2024-10-17 19:31:34.682720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c780 is same with the state(6) to be set 00:29:25.673 [2024-10-17 19:31:34.682734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:25.673 [2024-10-17 19:31:34.682742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.673 [2024-10-17 19:31:34.682750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0 00:29:25.673 [2024-10-17 19:31:34.682759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.673 [2024-10-17 19:31:34.682844] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b9c780 was disconnected and freed. reset controller. 
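The block above is the abort storm that opens this phase of the timeout test: every queued READ/WRITE on qpair 1 is completed with ABORTED - SQ DELETION (00/08) before qpair 0x1b9c780 is disconnected and freed and the controller reset begins. When triaging a console log like this one, a few greps over the message formats visible above are usually enough to summarize the storm; a minimal sketch follows (the log file name is an assumption, not taken from this run):

LOG=${1:-console.log}

# Aborted submissions by opcode (READ vs WRITE), from the
# nvme_io_qpair_print_command lines shown above
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$LOG" \
  | awk '{print $NF}' | sort | uniq -c

# Total "ABORTED - SQ DELETION" completions (occurrences, not lines)
grep -o 'ABORTED - SQ DELETION (00/08)' "$LOG" | wc -l

# Which qpairs ended up disconnected and freed
grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' "$LOG" | sort -u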
00:29:25.673 [2024-10-17 19:31:34.683100] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.673 [2024-10-17 19:31:34.683224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:25.673 [2024-10-17 19:31:34.683374] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.673 [2024-10-17 19:31:34.683405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:25.673 [2024-10-17 19:31:34.683416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:25.673 [2024-10-17 19:31:34.683436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:25.673 [2024-10-17 19:31:34.683453] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.673 [2024-10-17 19:31:34.683462] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.673 [2024-10-17 19:31:34.683474] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.673 [2024-10-17 19:31:34.683495] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.673 [2024-10-17 19:31:34.683508] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.673 19:31:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:26.674 3602.00 IOPS, 14.07 MiB/s [2024-10-17T19:31:35.932Z] [2024-10-17 19:31:35.683712] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.674 [2024-10-17 19:31:35.683804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:26.674 [2024-10-17 19:31:35.683826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:26.674 [2024-10-17 19:31:35.683857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:26.674 [2024-10-17 19:31:35.683885] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.674 [2024-10-17 19:31:35.683896] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.674 [2024-10-17 19:31:35.683908] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.674 [2024-10-17 19:31:35.683947] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.674 [2024-10-17 19:31:35.683964] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.619 2401.33 IOPS, 9.38 MiB/s [2024-10-17T19:31:36.877Z] [2024-10-17 19:31:36.684178] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.619 [2024-10-17 19:31:36.684270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:27.619 [2024-10-17 19:31:36.684289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:27.619 [2024-10-17 19:31:36.684320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:27.619 [2024-10-17 19:31:36.684343] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.619 [2024-10-17 19:31:36.684354] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.620 [2024-10-17 19:31:36.684368] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.620 [2024-10-17 19:31:36.684401] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.620 [2024-10-17 19:31:36.684415] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.557 1801.00 IOPS, 7.04 MiB/s [2024-10-17T19:31:37.815Z] [2024-10-17 19:31:37.688125] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.557 [2024-10-17 19:31:37.688236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1a2e0 with addr=10.0.0.3, port=4420 00:29:28.557 [2024-10-17 19:31:37.688254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a2e0 is same with the state(6) to be set 00:29:28.557 [2024-10-17 19:31:37.688532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1a2e0 (9): Bad file descriptor 00:29:28.557 [2024-10-17 19:31:37.688798] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.557 [2024-10-17 19:31:37.688821] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.557 [2024-10-17 19:31:37.688834] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.557 [2024-10-17 19:31:37.692784] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.557 [2024-10-17 19:31:37.692830] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.557 19:31:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:28.815 [2024-10-17 19:31:37.985031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:28.815 19:31:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82439 00:29:29.637 1440.80 IOPS, 5.63 MiB/s [2024-10-17T19:31:38.895Z] [2024-10-17 19:31:38.731295] bdev_nvme.c:2215:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
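What this stretch shows: each host reconnect attempt gets connect() errno 111 (connection refused) because the subsystem's listener had been dropped, so the controller stays in failed state and the reset is retried roughly once per second until timeout.sh@102 re-adds the listener on 10.0.0.3:4420, at which point the next reset logs "Resetting controller successful." A minimal sketch of that remove/re-add pattern, using the same rpc.py calls that appear in this log (this is not the test script itself; the sleep merely mirrors the sleep 3 at timeout.sh@101):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the listener: in-flight I/O on the target side is aborted
# (SQ DELETION) and the host enters its reset/reconnect loop
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

# Let a few reconnect attempts fail with errno 111
sleep 3

# Restore the listener; the next reconnect succeeds and bdev_nvme
# logs "Resetting controller successful."
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420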
00:29:31.503 2485.83 IOPS, 9.71 MiB/s [2024-10-17T19:31:41.694Z] 3413.00 IOPS, 13.33 MiB/s [2024-10-17T19:31:42.628Z] 4121.38 IOPS, 16.10 MiB/s [2024-10-17T19:31:44.001Z] 4665.22 IOPS, 18.22 MiB/s [2024-10-17T19:31:44.001Z] 5104.30 IOPS, 19.94 MiB/s 00:29:34.743 Latency(us) 00:29:34.743 [2024-10-17T19:31:44.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.743 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:34.743 Verification LBA range: start 0x0 length 0x4000 00:29:34.743 NVMe0n1 : 10.01 5111.88 19.97 3599.22 0.00 14655.47 681.43 3019898.88 00:29:34.743 [2024-10-17T19:31:44.001Z] =================================================================================================================== 00:29:34.743 [2024-10-17T19:31:44.001Z] Total : 5111.88 19.97 3599.22 0.00 14655.47 0.00 3019898.88 00:29:34.743 { 00:29:34.743 "results": [ 00:29:34.743 { 00:29:34.743 "job": "NVMe0n1", 00:29:34.743 "core_mask": "0x4", 00:29:34.743 "workload": "verify", 00:29:34.743 "status": "finished", 00:29:34.743 "verify_range": { 00:29:34.743 "start": 0, 00:29:34.743 "length": 16384 00:29:34.744 }, 00:29:34.744 "queue_depth": 128, 00:29:34.744 "io_size": 4096, 00:29:34.744 "runtime": 10.010213, 00:29:34.744 "iops": 5111.879237734502, 00:29:34.744 "mibps": 19.968278272400397, 00:29:34.744 "io_failed": 36029, 00:29:34.744 "io_timeout": 0, 00:29:34.744 "avg_latency_us": 14655.471111259381, 00:29:34.744 "min_latency_us": 681.4254545454545, 00:29:34.744 "max_latency_us": 3019898.88 00:29:34.744 } 00:29:34.744 ], 00:29:34.744 "core_count": 1 00:29:34.744 } 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82318 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82318 ']' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82318 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82318 00:29:34.744 killing process with pid 82318 00:29:34.744 Received shutdown signal, test time was about 10.000000 seconds 00:29:34.744 00:29:34.744 Latency(us) 00:29:34.744 [2024-10-17T19:31:44.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.744 [2024-10-17T19:31:44.002Z] =================================================================================================================== 00:29:34.744 [2024-10-17T19:31:44.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82318' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82318 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82318 00:29:34.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
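The summary table and the JSON block above describe the same run, and the derived columns can be cross-checked from the raw fields: MiB/s is iops * io_size / 2^20, and the Fail/s column is io_failed / runtime. A quick sanity check, assuming the JSON results were saved to a file (results.json is hypothetical here; the key names are exactly the ones printed above):

jq -r '.results[0]
  | "MiB/s  = \(.iops * .io_size / 1048576)",
    "Fail/s = \(.io_failed / .runtime)"' results.json

# 5111.88 * 4096 / 1048576 ≈ 19.97  -> matches the 19.97 MiB/s column
# 36029 / 10.010213        ≈ 3599.2 -> matches the 3599.22 Fail/s column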
00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82549 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82549 /var/tmp/bdevperf.sock 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82549 ']' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:34.744 19:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:34.744 [2024-10-17 19:31:43.938815] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:29:34.744 [2024-10-17 19:31:43.939201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82549 ] 00:29:35.001 [2024-10-17 19:31:44.084776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.001 [2024-10-17 19:31:44.156869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.001 [2024-10-17 19:31:44.215614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:35.259 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.259 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:29:35.259 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82552 00:29:35.259 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82549 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:35.259 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:35.517 19:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:35.775 NVMe0n1 00:29:35.775 19:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82598 00:29:35.775 19:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:35.775 19:31:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:36.032 Running I/O for 10 seconds... 
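The second phase above is driven entirely over bdevperf's own RPC socket: bdevperf is started idle on /var/tmp/bdevperf.sock, the NVMe bdev options are set, the NVMe-oF controller is attached with a 5 s ctrlr-loss timeout and a 2 s reconnect delay, and the 10-second randread run is then launched with bdevperf.py perform_tests. A condensed sketch of that sequence with the same binaries, socket and flags printed in the log (the waitforlisten/killprocess supervision from the test harness is simplified to a plain background job here):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle, listening for RPCs on its own socket
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" \
    -q 128 -o 4096 -w randread -t 10 -f &
BDEVPERF_PID=$!

# Same option flags as in the log (set before attaching the controller)
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

# Attach the target; loss timeout and reconnect delay drive the
# reset/reconnect behaviour exercised by this test
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Run the actual I/O job on the attached NVMe0n1 bdev
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

wait "$BDEVPERF_PID"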
00:29:36.965 19:31:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:37.226 14099.00 IOPS, 55.07 MiB/s [2024-10-17T19:31:46.484Z] [2024-10-17 19:31:46.318494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 
00:29:37.226 [2024-10-17 19:31:46.318897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.318993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319327] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.226 [2024-10-17 19:31:46.319403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the 
state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345250 is same with the state(6) to be set 00:29:37.227 [2024-10-17 19:31:46.319907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.319948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.319974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.319985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.319998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 
19:31:46.320216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.227 [2024-10-17 19:31:46.320312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.227 [2024-10-17 19:31:46.320324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.320988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.320999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.228 [2024-10-17 19:31:46.321048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.228 [2024-10-17 19:31:46.321189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.228 [2024-10-17 19:31:46.321198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.321982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.321991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.322003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.322012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.322023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.322032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.322043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.322064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.322076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.322085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.229 [2024-10-17 19:31:46.322096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.229 [2024-10-17 19:31:46.322105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:37.230 [2024-10-17 19:31:46.322149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 
19:31:46.322370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.230 [2024-10-17 19:31:46.322658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b1550 is same with the state(6) to be set 00:29:37.230 [2024-10-17 19:31:46.322683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:37.230 [2024-10-17 19:31:46.322691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:37.230 [2024-10-17 19:31:46.322699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:29:37.230 [2024-10-17 19:31:46.322709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322765] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15b1550 was disconnected and freed. reset controller. 
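Editor's note: the wall of paired READ / "ABORTED - SQ DELETION (00/08)" notices above is the expected fallout of this timeout test. When the host tears down the I/O submission queue as part of a controller reset, every still-queued READ is completed manually with status type 0x0, status code 0x08 (command aborted due to SQ deletion), after which the qpair is disconnected and freed. When digging through a saved copy of such a log it can help to tally the aborts per queue instead of reading them one by one; a minimal sketch, assuming the console output was captured to build.log (a hypothetical path):

# Tally "ABORTED - SQ DELETION" completions per submission queue id.
# build.log is a placeholder for wherever this console output was saved.
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log \
  | awk '{print $NF}' \
  | sort | uniq -c | sort -rn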
00:29:37.230 [2024-10-17 19:31:46.322859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.230 [2024-10-17 19:31:46.322944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.230 [2024-10-17 19:31:46.322968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.230 [2024-10-17 19:31:46.322988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.322998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.230 [2024-10-17 19:31:46.323007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.230 [2024-10-17 19:31:46.323016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15432e0 is same with the state(6) to be set 00:29:37.230 [2024-10-17 19:31:46.323283] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.230 [2024-10-17 19:31:46.323310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15432e0 (9): Bad file descriptor 00:29:37.230 [2024-10-17 19:31:46.323436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.230 [2024-10-17 19:31:46.323460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15432e0 with addr=10.0.0.3, port=4420 00:29:37.230 [2024-10-17 19:31:46.323471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15432e0 is same with the state(6) to be set 00:29:37.230 [2024-10-17 19:31:46.323490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15432e0 (9): Bad file descriptor 00:29:37.230 [2024-10-17 19:31:46.323506] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.231 [2024-10-17 19:31:46.323516] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.231 [2024-10-17 19:31:46.323528] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.231 [2024-10-17 19:31:46.323556] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
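Editor's note: connect() failing with errno = 111 is ECONNREFUSED; the target listener at 10.0.0.3:4420 has been taken down on purpose, so each reconnect attempt fails immediately and bdev_nvme schedules the next one after the configured reconnect delay (the roughly two-second spacing visible in the timestamps below). That behaviour is governed by the reconnect options supplied when the controller is attached. As a hedged sketch only, this is roughly how such a controller can be attached with explicit reconnect tuning through rpc.py; the long option names are recalled from memory and may differ between SPDK releases, so check rpc.py bdev_nvme_attach_controller --help rather than reading this as the exact invocation used by timeout.sh:

# Sketch, not the literal command from this run: attach an NVMe-oF/TCP controller
# with explicit reconnect behaviour. Option spellings below are assumptions.
scripts/rpc.py bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 30 \
    --reconnect-delay-sec 2 \
    --fast-io-fail-timeout-sec 10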
00:29:37.231 [2024-10-17 19:31:46.336952] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.231 19:31:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82598 00:29:39.105 8131.00 IOPS, 31.76 MiB/s [2024-10-17T19:31:48.364Z] 5420.67 IOPS, 21.17 MiB/s [2024-10-17T19:31:48.364Z] [2024-10-17 19:31:48.337276] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.106 [2024-10-17 19:31:48.337367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15432e0 with addr=10.0.0.3, port=4420 00:29:39.106 [2024-10-17 19:31:48.337391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15432e0 is same with the state(6) to be set 00:29:39.106 [2024-10-17 19:31:48.337429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15432e0 (9): Bad file descriptor 00:29:39.106 [2024-10-17 19:31:48.337461] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.106 [2024-10-17 19:31:48.337475] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.106 [2024-10-17 19:31:48.337491] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.106 [2024-10-17 19:31:48.337534] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:39.106 [2024-10-17 19:31:48.337549] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.040 4065.50 IOPS, 15.88 MiB/s [2024-10-17T19:31:50.555Z] 3252.40 IOPS, 12.70 MiB/s [2024-10-17T19:31:50.555Z] [2024-10-17 19:31:50.337797] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.297 [2024-10-17 19:31:50.337898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15432e0 with addr=10.0.0.3, port=4420 00:29:41.297 [2024-10-17 19:31:50.337921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15432e0 is same with the state(6) to be set 00:29:41.297 [2024-10-17 19:31:50.337956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15432e0 (9): Bad file descriptor 00:29:41.297 [2024-10-17 19:31:50.337983] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.297 [2024-10-17 19:31:50.337996] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.297 [2024-10-17 19:31:50.338010] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.297 [2024-10-17 19:31:50.338048] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:41.297 [2024-10-17 19:31:50.338078] nvme_ctrlr.c:1770:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.181 2710.33 IOPS, 10.59 MiB/s [2024-10-17T19:31:52.439Z] 2323.14 IOPS, 9.07 MiB/s [2024-10-17T19:31:52.439Z] [2024-10-17 19:31:52.338175] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:43.181 [2024-10-17 19:31:52.338269] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.181 [2024-10-17 19:31:52.338284] nvme_ctrlr.c:1868:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.181 [2024-10-17 19:31:52.338296] nvme_ctrlr.c:1140:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:29:43.181 [2024-10-17 19:31:52.338329] bdev_nvme.c:2213:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.118 2032.75 IOPS, 7.94 MiB/s 00:29:44.118 Latency(us) 00:29:44.118 [2024-10-17T19:31:53.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.118 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:44.118 NVMe0n1 : 8.17 1990.69 7.78 15.67 0.00 63678.67 1608.61 7015926.69 00:29:44.118 [2024-10-17T19:31:53.376Z] =================================================================================================================== 00:29:44.118 [2024-10-17T19:31:53.376Z] Total : 1990.69 7.78 15.67 0.00 63678.67 1608.61 7015926.69 00:29:44.118 { 00:29:44.118 "results": [ 00:29:44.118 { 00:29:44.118 "job": "NVMe0n1", 00:29:44.118 "core_mask": "0x4", 00:29:44.118 "workload": "randread", 00:29:44.118 "status": "finished", 00:29:44.118 "queue_depth": 128, 00:29:44.118 "io_size": 4096, 00:29:44.118 "runtime": 8.169014, 00:29:44.118 "iops": 1990.693124041653, 00:29:44.118 "mibps": 7.776145015787707, 00:29:44.118 "io_failed": 128, 00:29:44.118 "io_timeout": 0, 00:29:44.118 "avg_latency_us": 63678.67108325476, 00:29:44.118 "min_latency_us": 1608.610909090909, 00:29:44.118 "max_latency_us": 7015926.69090909 00:29:44.118 } 00:29:44.118 ], 00:29:44.118 "core_count": 1 00:29:44.118 } 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.118 Attaching 5 probes... 
00:29:44.118 1459.938655: reset bdev controller NVMe0 00:29:44.118 1460.012066: reconnect bdev controller NVMe0 00:29:44.118 3473.706350: reconnect delay bdev controller NVMe0 00:29:44.118 3473.732878: reconnect bdev controller NVMe0 00:29:44.118 5474.259108: reconnect delay bdev controller NVMe0 00:29:44.118 5474.288308: reconnect bdev controller NVMe0 00:29:44.118 7474.789036: reconnect delay bdev controller NVMe0 00:29:44.118 7474.824267: reconnect bdev controller NVMe0 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82552 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82549 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82549 ']' 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82549 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.118 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82549 00:29:44.376 killing process with pid 82549 00:29:44.376 Received shutdown signal, test time was about 8.242635 seconds 00:29:44.376 00:29:44.376 Latency(us) 00:29:44.376 [2024-10-17T19:31:53.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.376 [2024-10-17T19:31:53.634Z] =================================================================================================================== 00:29:44.376 [2024-10-17T19:31:53.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.376 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:44.376 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:44.376 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82549' 00:29:44.376 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82549 00:29:44.376 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82549 00:29:44.635 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.893 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:44.893 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:44.893 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:44.893 19:31:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.893 19:31:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.893 rmmod nvme_tcp 00:29:44.893 rmmod nvme_fabrics 00:29:44.893 rmmod nvme_keyring 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 82133 ']' 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 82133 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82133 ']' 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82133 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.893 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82133 00:29:45.151 killing process with pid 82133 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82133' 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82133 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82133 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:45.151 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:45.408 19:31:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:29:45.408 ************************************ 00:29:45.408 END TEST nvmf_timeout 00:29:45.408 ************************************ 00:29:45.408 00:29:45.408 real 0m46.586s 00:29:45.408 user 2m16.666s 00:29:45.408 sys 0m6.045s 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.408 19:31:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:45.667 19:31:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:29:45.667 19:31:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:45.667 ************************************ 00:29:45.667 END TEST nvmf_host 00:29:45.667 ************************************ 00:29:45.667 00:29:45.667 real 5m16.766s 00:29:45.667 user 13m43.263s 00:29:45.667 sys 1m13.649s 00:29:45.667 19:31:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.667 19:31:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.667 19:31:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:45.667 19:31:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:29:45.667 ************************************ 00:29:45.667 END TEST nvmf_tcp 00:29:45.667 ************************************ 00:29:45.667 00:29:45.667 real 13m16.968s 00:29:45.667 user 31m57.449s 00:29:45.667 sys 3m17.268s 00:29:45.667 19:31:54 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.667 19:31:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.667 19:31:54 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:29:45.667 19:31:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:45.667 19:31:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:45.667 19:31:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.667 19:31:54 -- common/autotest_common.sh@10 -- # set +x 00:29:45.667 ************************************ 00:29:45.667 START TEST nvmf_dif 00:29:45.667 ************************************ 00:29:45.667 19:31:54 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:45.667 * Looking for test storage... 
00:29:45.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:45.667 19:31:54 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.667 19:31:54 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.667 19:31:54 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.926 19:31:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.926 --rc genhtml_branch_coverage=1 00:29:45.926 --rc genhtml_function_coverage=1 00:29:45.926 --rc genhtml_legend=1 00:29:45.926 --rc geninfo_all_blocks=1 00:29:45.926 --rc geninfo_unexecuted_blocks=1 00:29:45.926 00:29:45.926 ' 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.926 --rc genhtml_branch_coverage=1 00:29:45.926 --rc genhtml_function_coverage=1 00:29:45.926 --rc genhtml_legend=1 00:29:45.926 --rc geninfo_all_blocks=1 00:29:45.926 --rc geninfo_unexecuted_blocks=1 00:29:45.926 00:29:45.926 ' 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:29:45.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.926 --rc genhtml_branch_coverage=1 00:29:45.926 --rc genhtml_function_coverage=1 00:29:45.926 --rc genhtml_legend=1 00:29:45.926 --rc geninfo_all_blocks=1 00:29:45.926 --rc geninfo_unexecuted_blocks=1 00:29:45.926 00:29:45.926 ' 00:29:45.926 19:31:54 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.926 --rc genhtml_branch_coverage=1 00:29:45.926 --rc genhtml_function_coverage=1 00:29:45.926 --rc genhtml_legend=1 00:29:45.926 --rc geninfo_all_blocks=1 00:29:45.926 --rc geninfo_unexecuted_blocks=1 00:29:45.926 00:29:45.926 ' 00:29:45.926 19:31:54 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.926 19:31:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.926 19:31:55 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:45.926 19:31:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.926 19:31:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.926 19:31:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.926 19:31:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.926 19:31:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.926 19:31:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.926 19:31:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.926 19:31:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:45.927 19:31:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.927 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.927 19:31:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:45.927 19:31:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:45.927 19:31:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:45.927 19:31:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:45.927 19:31:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.927 19:31:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:45.927 19:31:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:29:45.927 19:31:55 
nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:45.927 Cannot find device "nvmf_init_br" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:45.927 Cannot find device "nvmf_init_br2" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:45.927 Cannot find device "nvmf_tgt_br" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:45.927 Cannot find device "nvmf_tgt_br2" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:45.927 Cannot find device "nvmf_init_br" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@166 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:45.927 Cannot find device "nvmf_init_br2" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@167 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:45.927 Cannot find device "nvmf_tgt_br" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@168 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:45.927 Cannot find device "nvmf_tgt_br2" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@169 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:45.927 Cannot find device "nvmf_br" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@170 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:29:45.927 Cannot find device "nvmf_init_if" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@171 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:45.927 Cannot find device "nvmf_init_if2" 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@172 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:45.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@173 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:45.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@174 -- # true 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:45.927 19:31:55 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:46.185 19:31:55 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:46.186 19:31:55 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:46.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:46.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:29:46.186 00:29:46.186 --- 10.0.0.3 ping statistics --- 00:29:46.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.186 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:46.186 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:46.186 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:29:46.186 00:29:46.186 --- 10.0.0.4 ping statistics --- 00:29:46.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.186 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:46.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:29:46.186 00:29:46.186 --- 10.0.0.1 ping statistics --- 00:29:46.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.186 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:46.186 19:31:55 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:46.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
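The entries above build the virtual test network (nvmf_veth_init) and open TCP/4420 in the firewall before the connectivity pings. Condensed to the essential commands for one interface pair, everything below is lifted from the trace itself, while the ipts helper body is only inferred from the expanded common.sh@788 lines:

# Target-side veth ends live in the nvmf_tgt_ns_spdk namespace; all peer ends are bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# (nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4 follow the same pattern)

# ipts appears to be a thin wrapper that tags each rule so cleanup can find it later:
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT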
00:29:46.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:29:46.444 00:29:46.444 --- 10.0.0.2 ping statistics --- 00:29:46.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.444 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:46.444 19:31:55 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.444 19:31:55 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:29:46.444 19:31:55 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:29:46.444 19:31:55 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:46.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:46.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:46.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:46.702 19:31:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:46.702 19:31:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=83100 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:46.702 19:31:55 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 83100 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 83100 ']' 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.702 19:31:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.702 [2024-10-17 19:31:55.951879] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:29:46.702 [2024-10-17 19:31:55.952003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.960 [2024-10-17 19:31:56.095080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.960 [2024-10-17 19:31:56.173583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:46.960 [2024-10-17 19:31:56.173654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.960 [2024-10-17 19:31:56.173670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.960 [2024-10-17 19:31:56.173681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.960 [2024-10-17 19:31:56.173691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.960 [2024-10-17 19:31:56.174204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.218 [2024-10-17 19:31:56.234851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:47.218 19:31:56 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 19:31:56 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.218 19:31:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:47.218 19:31:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 [2024-10-17 19:31:56.350861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.218 19:31:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 ************************************ 00:29:47.218 START TEST fio_dif_1_default 00:29:47.218 ************************************ 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 bdev_null0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:47.218 
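Stripped of the xtrace noise, the per-test target setup traced above and continued just below amounts to five RPCs (rpc_cmd is the suite's wrapper that forwards to the nvmf_tgt listening on /var/tmp/spdk.sock; addresses and NQNs are the ones used in this run):

rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip          # TCP transport; target inserts/strips protection info
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB null bdev, 512-byte blocks + 16 bytes metadata, DIF type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420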
19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:47.218 [2024-10-17 19:31:56.395039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:47.218 { 00:29:47.218 "params": { 00:29:47.218 "name": "Nvme$subsystem", 00:29:47.218 "trtype": "$TEST_TRANSPORT", 00:29:47.218 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:47.218 "adrfam": "ipv4", 00:29:47.218 "trsvcid": "$NVMF_PORT", 00:29:47.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:47.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:47.218 "hdgst": ${hdgst:-false}, 00:29:47.218 "ddgst": ${ddgst:-false} 00:29:47.218 }, 00:29:47.218 "method": "bdev_nvme_attach_controller" 00:29:47.218 } 00:29:47.218 EOF 00:29:47.218 )") 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.218 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:47.219 "params": { 00:29:47.219 "name": "Nvme0", 00:29:47.219 "trtype": "tcp", 00:29:47.219 "traddr": "10.0.0.3", 00:29:47.219 "adrfam": "ipv4", 00:29:47.219 "trsvcid": "4420", 00:29:47.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.219 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:47.219 "hdgst": false, 00:29:47.219 "ddgst": false 00:29:47.219 }, 00:29:47.219 "method": "bdev_nvme_attach_controller" 00:29:47.219 }' 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:47.219 19:31:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:47.477 fio-3.35 00:29:47.477 Starting 1 thread 00:29:59.673 00:29:59.673 filename0: (groupid=0, jobs=1): err= 0: pid=83159: Thu Oct 17 19:32:07 2024 00:29:59.673 read: IOPS=8327, BW=32.5MiB/s (34.1MB/s)(325MiB/10001msec) 00:29:59.673 slat (usec): min=7, max=104, avg= 8.82, stdev= 2.57 00:29:59.673 clat (usec): min=175, max=2514, avg=454.40, stdev=44.32 00:29:59.673 lat (usec): min=185, max=2550, avg=463.22, stdev=44.94 00:29:59.673 clat percentiles (usec): 00:29:59.673 | 1.00th=[ 408], 5.00th=[ 416], 10.00th=[ 
420], 20.00th=[ 429], 00:29:59.673 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 445], 60.00th=[ 453], 00:29:59.674 | 70.00th=[ 457], 80.00th=[ 469], 90.00th=[ 498], 95.00th=[ 537], 00:29:59.674 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 668], 99.95th=[ 709], 00:29:59.674 | 99.99th=[ 979] 00:29:59.674 bw ( KiB/s): min=30240, max=34944, per=99.82%, avg=33248.00, stdev=1460.24, samples=19 00:29:59.674 iops : min= 7560, max= 8736, avg=8312.00, stdev=365.06, samples=19 00:29:59.674 lat (usec) : 250=0.01%, 500=90.81%, 750=9.15%, 1000=0.02% 00:29:59.674 lat (msec) : 2=0.01%, 4=0.01% 00:29:59.674 cpu : usr=84.09%, sys=13.98%, ctx=73, majf=0, minf=0 00:29:59.674 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.674 issued rwts: total=83281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.674 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:59.674 00:29:59.674 Run status group 0 (all jobs): 00:29:59.674 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=325MiB (341MB), run=10001-10001msec 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 00:29:59.674 real 0m11.068s 00:29:59.674 user 0m9.112s 00:29:59.674 sys 0m1.686s 00:29:59.674 ************************************ 00:29:59.674 END TEST fio_dif_1_default 00:29:59.674 ************************************ 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:59.674 19:32:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:59.674 19:32:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 ************************************ 00:29:59.674 START TEST fio_dif_1_multi_subsystems 00:29:59.674 ************************************ 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 bdev_null0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 [2024-10-17 19:32:07.516419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 bdev_null1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.674 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:59.674 { 00:29:59.674 "params": { 00:29:59.674 "name": "Nvme$subsystem", 00:29:59.674 "trtype": "$TEST_TRANSPORT", 00:29:59.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.674 "adrfam": "ipv4", 00:29:59.674 "trsvcid": "$NVMF_PORT", 00:29:59.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.674 "hdgst": ${hdgst:-false}, 00:29:59.674 "ddgst": ${ddgst:-false} 00:29:59.674 }, 00:29:59.674 "method": "bdev_nvme_attach_controller" 00:29:59.674 } 00:29:59.674 EOF 00:29:59.674 )") 00:29:59.674 19:32:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:59.675 { 00:29:59.675 "params": { 00:29:59.675 "name": "Nvme$subsystem", 00:29:59.675 "trtype": "$TEST_TRANSPORT", 00:29:59.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.675 "adrfam": "ipv4", 00:29:59.675 "trsvcid": "$NVMF_PORT", 00:29:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.675 "hdgst": ${hdgst:-false}, 00:29:59.675 "ddgst": ${ddgst:-false} 00:29:59.675 }, 00:29:59.675 "method": "bdev_nvme_attach_controller" 00:29:59.675 } 00:29:59.675 EOF 00:29:59.675 )") 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
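As with the first fio run, the workload is driven through SPDK's fio bdev plugin rather than the kernel: the JSON printed just below attaches two NVMe-oF/TCP controllers (Nvme0, Nvme1), and both that JSON and the generated job file reach fio as /dev/fd/N paths, so nothing is written to disk. Roughly, the invocation traced here is equivalent to the following sketch (the real fio_bdev/fio helpers in dif.sh may differ in detail):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(create_json_sub_conf 0 1) \
    <(gen_fio_conf)
# each <(...) process substitution appears to fio as a /dev/fd/N path,
# matching the /dev/fd/62 and /dev/fd/61 arguments seen in the trace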
00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:59.675 "params": { 00:29:59.675 "name": "Nvme0", 00:29:59.675 "trtype": "tcp", 00:29:59.675 "traddr": "10.0.0.3", 00:29:59.675 "adrfam": "ipv4", 00:29:59.675 "trsvcid": "4420", 00:29:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.675 "hdgst": false, 00:29:59.675 "ddgst": false 00:29:59.675 }, 00:29:59.675 "method": "bdev_nvme_attach_controller" 00:29:59.675 },{ 00:29:59.675 "params": { 00:29:59.675 "name": "Nvme1", 00:29:59.675 "trtype": "tcp", 00:29:59.675 "traddr": "10.0.0.3", 00:29:59.675 "adrfam": "ipv4", 00:29:59.675 "trsvcid": "4420", 00:29:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.675 "hdgst": false, 00:29:59.675 "ddgst": false 00:29:59.675 }, 00:29:59.675 "method": "bdev_nvme_attach_controller" 00:29:59.675 }' 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:59.675 19:32:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.675 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.675 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.675 fio-3.35 00:29:59.675 Starting 2 threads 00:30:09.647 00:30:09.647 filename0: (groupid=0, jobs=1): err= 0: pid=83320: Thu Oct 17 19:32:18 2024 00:30:09.647 read: IOPS=4660, BW=18.2MiB/s (19.1MB/s)(182MiB/10001msec) 00:30:09.647 slat (nsec): min=7202, max=98947, avg=14331.48, stdev=5179.47 00:30:09.647 clat (usec): min=437, max=2836, avg=818.37, stdev=49.57 00:30:09.647 lat (usec): min=445, max=2847, avg=832.71, stdev=51.42 00:30:09.647 clat percentiles (usec): 00:30:09.647 | 1.00th=[ 750], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 783], 00:30:09.647 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:30:09.647 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 906], 00:30:09.647 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1106], 00:30:09.647 | 99.99th=[ 2802] 00:30:09.647 bw ( KiB/s): min=16960, max=19296, per=49.96%, avg=18632.74, stdev=654.09, samples=19 00:30:09.647 iops : min= 4240, max= 
4824, avg=4658.16, stdev=163.50, samples=19 00:30:09.647 lat (usec) : 500=0.01%, 750=1.10%, 1000=98.54% 00:30:09.647 lat (msec) : 2=0.33%, 4=0.02% 00:30:09.647 cpu : usr=90.33%, sys=8.20%, ctx=17, majf=0, minf=0 00:30:09.648 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.648 issued rwts: total=46612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.648 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:09.648 filename1: (groupid=0, jobs=1): err= 0: pid=83321: Thu Oct 17 19:32:18 2024 00:30:09.648 read: IOPS=4662, BW=18.2MiB/s (19.1MB/s)(182MiB/10001msec) 00:30:09.648 slat (nsec): min=6189, max=52382, avg=13789.27, stdev=4731.58 00:30:09.648 clat (usec): min=431, max=3788, avg=820.81, stdev=57.43 00:30:09.648 lat (usec): min=439, max=3819, avg=834.60, stdev=59.41 00:30:09.648 clat percentiles (usec): 00:30:09.648 | 1.00th=[ 709], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 783], 00:30:09.648 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:30:09.648 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 914], 00:30:09.648 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1090], 00:30:09.648 | 99.99th=[ 1139] 00:30:09.648 bw ( KiB/s): min=16960, max=19328, per=49.98%, avg=18639.16, stdev=650.79, samples=19 00:30:09.648 iops : min= 4240, max= 4832, avg=4659.79, stdev=162.70, samples=19 00:30:09.648 lat (usec) : 500=0.03%, 750=7.79%, 1000=91.86% 00:30:09.648 lat (msec) : 2=0.32%, 4=0.01% 00:30:09.648 cpu : usr=90.34%, sys=8.39%, ctx=8, majf=0, minf=0 00:30:09.648 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.648 issued rwts: total=46628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.648 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:09.648 00:30:09.648 Run status group 0 (all jobs): 00:30:09.648 READ: bw=36.4MiB/s (38.2MB/s), 18.2MiB/s-18.2MiB/s (19.1MB/s-19.1MB/s), io=364MiB (382MB), run=10001-10001msec 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 00:30:09.648 real 0m11.159s 00:30:09.648 user 0m18.837s 00:30:09.648 sys 0m1.955s 00:30:09.648 ************************************ 00:30:09.648 END TEST fio_dif_1_multi_subsystems 00:30:09.648 ************************************ 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:09.648 19:32:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:09.648 19:32:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 ************************************ 00:30:09.648 START TEST fio_dif_rand_params 00:30:09.648 ************************************ 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 bdev_null0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:09.648 [2024-10-17 19:32:18.730658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local 
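Here the test moves to DIF type 3 (NULL_DIF=3 above). As general T10 PI background rather than anything stated in the trace: type 1 checks the guard and an incrementing reference tag, type 2 takes the expected reference tag from the command, and type 3 keeps guard/application tags but performs no reference-tag check; the 16 bytes of per-block metadata configured here is where that protection information lives. Only the DIF type changes relative to the earlier sub-tests:

# NULL_SIZE=64 (MiB), NULL_BLOCK_SIZE=512, NULL_META=16 -- the constants set at target/dif.sh@15
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3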
sanitizers 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:09.648 { 00:30:09.648 "params": { 00:30:09.648 "name": "Nvme$subsystem", 00:30:09.648 "trtype": "$TEST_TRANSPORT", 00:30:09.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.648 "adrfam": "ipv4", 00:30:09.648 "trsvcid": "$NVMF_PORT", 00:30:09.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.648 "hdgst": ${hdgst:-false}, 00:30:09.648 "ddgst": ${ddgst:-false} 00:30:09.648 }, 00:30:09.648 "method": "bdev_nvme_attach_controller" 00:30:09.648 } 00:30:09.648 EOF 00:30:09.648 )") 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.648 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:09.649 "params": { 00:30:09.649 "name": "Nvme0", 00:30:09.649 "trtype": "tcp", 00:30:09.649 "traddr": "10.0.0.3", 00:30:09.649 "adrfam": "ipv4", 00:30:09.649 "trsvcid": "4420", 00:30:09.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.649 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.649 "hdgst": false, 00:30:09.649 "ddgst": false 00:30:09.649 }, 00:30:09.649 "method": "bdev_nvme_attach_controller" 00:30:09.649 }' 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:09.649 19:32:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.918 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:09.918 ... 
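The banner above reflects the parameters picked at target/dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5) applied to the single DIF-type-3 subsystem. A hand-written approximation of the job file gen_fio_conf feeds to fio — the exact options it emits may differ, and Nvme0n1 (the namespace bdev of the attached Nvme0 controller) is an assumption:

cat > /tmp/fio_dif_rand_params.job <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
numjobs=3
iodepth=3
EOF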
00:30:09.918 fio-3.35 00:30:09.918 Starting 3 threads 00:30:16.478 00:30:16.478 filename0: (groupid=0, jobs=1): err= 0: pid=83477: Thu Oct 17 19:32:24 2024 00:30:16.478 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(159MiB/5006msec) 00:30:16.478 slat (nsec): min=7164, max=65387, avg=14878.34, stdev=8675.31 00:30:16.478 clat (usec): min=6461, max=13314, avg=11794.35, stdev=353.69 00:30:16.478 lat (usec): min=6468, max=13349, avg=11809.22, stdev=354.42 00:30:16.478 clat percentiles (usec): 00:30:16.478 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:30:16.478 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:30:16.478 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[11994], 00:30:16.478 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:30:16.478 | 99.99th=[13304] 00:30:16.478 bw ( KiB/s): min=31488, max=33024, per=33.35%, avg=32409.60, stdev=485.73, samples=10 00:30:16.478 iops : min= 246, max= 258, avg=253.20, stdev= 3.79, samples=10 00:30:16.478 lat (msec) : 10=0.47%, 20=99.53% 00:30:16.478 cpu : usr=95.08%, sys=4.30%, ctx=8, majf=0, minf=0 00:30:16.478 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 issued rwts: total=1269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.478 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.478 filename0: (groupid=0, jobs=1): err= 0: pid=83478: Thu Oct 17 19:32:24 2024 00:30:16.478 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(158MiB/5002msec) 00:30:16.478 slat (nsec): min=7468, max=40630, avg=12087.64, stdev=5727.57 00:30:16.478 clat (usec): min=11089, max=13671, avg=11820.52, stdev=221.49 00:30:16.478 lat (usec): min=11098, max=13708, avg=11832.60, stdev=221.41 00:30:16.478 clat percentiles (usec): 00:30:16.478 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:30:16.478 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:30:16.478 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[11994], 00:30:16.478 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[13698], 00:30:16.478 | 99.99th=[13698] 00:30:16.478 bw ( KiB/s): min=32256, max=33024, per=33.36%, avg=32426.67, stdev=338.66, samples=9 00:30:16.478 iops : min= 252, max= 258, avg=253.33, stdev= 2.65, samples=9 00:30:16.478 lat (msec) : 20=100.00% 00:30:16.478 cpu : usr=94.54%, sys=4.84%, ctx=7, majf=0, minf=0 00:30:16.478 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.478 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.478 filename0: (groupid=0, jobs=1): err= 0: pid=83479: Thu Oct 17 19:32:24 2024 00:30:16.478 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(158MiB/5002msec) 00:30:16.478 slat (nsec): min=5249, max=59648, avg=13778.82, stdev=8150.76 00:30:16.478 clat (usec): min=11499, max=13039, avg=11814.32, stdev=194.71 00:30:16.478 lat (usec): min=11509, max=13061, avg=11828.10, stdev=195.45 00:30:16.478 clat percentiles (usec): 00:30:16.478 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:30:16.478 | 30.00th=[11731], 40.00th=[11731], 
50.00th=[11731], 60.00th=[11863], 00:30:16.478 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:30:16.478 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13042], 99.95th=[13042], 00:30:16.478 | 99.99th=[13042] 00:30:16.478 bw ( KiB/s): min=32256, max=33024, per=33.36%, avg=32426.67, stdev=338.66, samples=9 00:30:16.478 iops : min= 252, max= 258, avg=253.33, stdev= 2.65, samples=9 00:30:16.478 lat (msec) : 20=100.00% 00:30:16.478 cpu : usr=94.22%, sys=5.14%, ctx=16, majf=0, minf=0 00:30:16.478 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.478 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.478 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:16.478 00:30:16.478 Run status group 0 (all jobs): 00:30:16.478 READ: bw=94.9MiB/s (99.5MB/s), 31.6MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=475MiB (498MB), run=5002-5006msec 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.478 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:16.479 19:32:24 
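The create_subsystems 0 1 2 call above repeats, once per index, the same bdev/subsystem/listener pattern already traced for subsystems 0 and 1 earlier in this run, now with DIF type 2 for the bs=4k, 8-job, iodepth=16, two-file pass. Sketched as a loop (the real helper lives in dif.sh and may be structured differently):

for sub in 0 1 2; do
    rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.3 -s 4420
done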
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 bdev_null0 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 [2024-10-17 19:32:24.749714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 bdev_null1 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 bdev_null2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:16.479 { 00:30:16.479 "params": { 00:30:16.479 "name": "Nvme$subsystem", 00:30:16.479 "trtype": "$TEST_TRANSPORT", 00:30:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.479 "adrfam": "ipv4", 00:30:16.479 "trsvcid": "$NVMF_PORT", 00:30:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.479 "hdgst": ${hdgst:-false}, 00:30:16.479 "ddgst": ${ddgst:-false} 00:30:16.479 }, 00:30:16.479 "method": "bdev_nvme_attach_controller" 00:30:16.479 } 00:30:16.479 EOF 00:30:16.479 )") 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:16.479 { 00:30:16.479 "params": { 00:30:16.479 "name": "Nvme$subsystem", 00:30:16.479 "trtype": "$TEST_TRANSPORT", 00:30:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.479 "adrfam": "ipv4", 00:30:16.479 "trsvcid": "$NVMF_PORT", 00:30:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.479 "hdgst": ${hdgst:-false}, 00:30:16.479 "ddgst": ${ddgst:-false} 00:30:16.479 }, 00:30:16.479 "method": "bdev_nvme_attach_controller" 00:30:16.479 } 00:30:16.479 EOF 00:30:16.479 )") 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:16.479 19:32:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:16.479 { 00:30:16.479 "params": { 00:30:16.479 "name": "Nvme$subsystem", 00:30:16.479 "trtype": "$TEST_TRANSPORT", 00:30:16.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.479 "adrfam": "ipv4", 00:30:16.479 "trsvcid": "$NVMF_PORT", 00:30:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.479 "hdgst": ${hdgst:-false}, 00:30:16.479 "ddgst": ${ddgst:-false} 00:30:16.479 }, 00:30:16.479 "method": "bdev_nvme_attach_controller" 00:30:16.479 } 00:30:16.479 EOF 00:30:16.479 )") 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:30:16.479 19:32:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:16.479 "params": { 00:30:16.479 "name": "Nvme0", 00:30:16.479 "trtype": "tcp", 00:30:16.479 "traddr": "10.0.0.3", 00:30:16.479 "adrfam": "ipv4", 00:30:16.479 "trsvcid": "4420", 00:30:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.479 "hdgst": false, 00:30:16.479 "ddgst": false 00:30:16.479 }, 00:30:16.479 "method": "bdev_nvme_attach_controller" 00:30:16.479 },{ 00:30:16.479 "params": { 00:30:16.480 "name": "Nvme1", 00:30:16.480 "trtype": "tcp", 00:30:16.480 "traddr": "10.0.0.3", 00:30:16.480 "adrfam": "ipv4", 00:30:16.480 "trsvcid": "4420", 00:30:16.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.480 "hdgst": false, 00:30:16.480 "ddgst": false 00:30:16.480 }, 00:30:16.480 "method": "bdev_nvme_attach_controller" 00:30:16.480 },{ 00:30:16.480 "params": { 00:30:16.480 "name": "Nvme2", 00:30:16.480 "trtype": "tcp", 00:30:16.480 "traddr": "10.0.0.3", 00:30:16.480 "adrfam": "ipv4", 00:30:16.480 "trsvcid": "4420", 00:30:16.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:16.480 "hdgst": false, 00:30:16.480 "ddgst": false 00:30:16.480 }, 00:30:16.480 "method": "bdev_nvme_attach_controller" 00:30:16.480 }' 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:16.480 19:32:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.480 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.480 ... 00:30:16.480 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.480 ... 00:30:16.480 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:16.480 ... 00:30:16.480 fio-3.35 00:30:16.480 Starting 24 threads 00:30:28.746 00:30:28.746 filename0: (groupid=0, jobs=1): err= 0: pid=83575: Thu Oct 17 19:32:35 2024 00:30:28.746 read: IOPS=216, BW=865KiB/s (885kB/s)(8648KiB/10002msec) 00:30:28.746 slat (usec): min=4, max=4042, avg=25.77, stdev=117.96 00:30:28.746 clat (msec): min=2, max=152, avg=73.90, stdev=25.38 00:30:28.746 lat (msec): min=2, max=152, avg=73.93, stdev=25.38 00:30:28.746 clat percentiles (msec): 00:30:28.746 | 1.00th=[ 16], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 51], 00:30:28.746 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 81], 00:30:28.746 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 111], 95.00th=[ 122], 00:30:28.746 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:30:28.746 | 99.99th=[ 153] 00:30:28.746 bw ( KiB/s): min= 616, max= 1397, per=4.35%, avg=862.89, stdev=169.01, samples=19 00:30:28.746 iops : min= 154, max= 349, avg=215.68, stdev=42.24, samples=19 00:30:28.746 lat (msec) : 4=0.60%, 10=0.14%, 20=0.32%, 50=18.92%, 100=63.83% 00:30:28.746 lat (msec) : 250=16.19% 00:30:28.746 cpu : usr=38.25%, sys=1.81%, ctx=1068, majf=0, minf=9 00:30:28.746 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=83.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:28.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.746 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.746 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.746 filename0: (groupid=0, jobs=1): err= 0: pid=83576: Thu Oct 17 19:32:35 2024 00:30:28.746 read: IOPS=215, BW=861KiB/s (882kB/s)(8616KiB/10004msec) 00:30:28.746 slat (usec): min=4, max=8038, avg=41.10, stdev=361.58 00:30:28.746 clat (msec): min=8, max=153, avg=74.13, stdev=25.13 00:30:28.746 lat (msec): min=8, max=153, avg=74.18, stdev=25.13 00:30:28.746 clat percentiles (msec): 00:30:28.746 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 51], 00:30:28.746 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 81], 00:30:28.746 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 111], 95.00th=[ 121], 00:30:28.746 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:30:28.746 | 99.99th=[ 155] 00:30:28.746 bw ( KiB/s): min= 604, max= 1392, per=4.33%, avg=859.11, stdev=169.75, samples=19 00:30:28.746 iops : min= 151, max= 348, avg=214.74, stdev=42.47, samples=19 00:30:28.746 lat (msec) : 10=0.42%, 20=0.46%, 50=18.34%, 100=65.09%, 250=15.69% 00:30:28.746 cpu : usr=38.25%, sys=1.84%, ctx=1148, majf=0, minf=9 00:30:28.746 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=83.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:28.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.746 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.746 issued rwts: 
total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.746 filename0: (groupid=0, jobs=1): err= 0: pid=83577: Thu Oct 17 19:32:35 2024 00:30:28.746 read: IOPS=214, BW=860KiB/s (880kB/s)(8636KiB/10044msec) 00:30:28.746 slat (usec): min=3, max=4115, avg=25.98, stdev=149.09 00:30:28.746 clat (msec): min=8, max=155, avg=74.26, stdev=25.60 00:30:28.746 lat (msec): min=8, max=155, avg=74.28, stdev=25.59 00:30:28.746 clat percentiles (msec): 00:30:28.746 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 53], 00:30:28.746 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 81], 00:30:28.746 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 111], 95.00th=[ 121], 00:30:28.746 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 155], 99.95th=[ 155], 00:30:28.746 | 99.99th=[ 157] 00:30:28.746 bw ( KiB/s): min= 584, max= 1752, per=4.31%, avg=856.60, stdev=239.53, samples=20 00:30:28.746 iops : min= 146, max= 438, avg=214.10, stdev=59.90, samples=20 00:30:28.746 lat (msec) : 10=0.05%, 20=0.51%, 50=14.91%, 100=68.46%, 250=16.07% 00:30:28.746 cpu : usr=40.58%, sys=2.05%, ctx=1515, majf=0, minf=9 00:30:28.746 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:28.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename0: (groupid=0, jobs=1): err= 0: pid=83578: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=208, BW=834KiB/s (854kB/s)(8396KiB/10069msec) 00:30:28.747 slat (usec): min=4, max=8045, avg=38.01, stdev=373.81 00:30:28.747 clat (usec): min=809, max=179955, avg=76452.66, stdev=28897.16 00:30:28.747 lat (usec): min=819, max=179964, avg=76490.67, stdev=28908.96 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 52], 00:30:28.747 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:30:28.747 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 118], 95.00th=[ 121], 00:30:28.747 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 169], 00:30:28.747 | 99.99th=[ 180] 00:30:28.747 bw ( KiB/s): min= 528, max= 2200, per=4.20%, avg=833.20, stdev=343.82, samples=20 00:30:28.747 iops : min= 132, max= 550, avg=208.30, stdev=85.96, samples=20 00:30:28.747 lat (usec) : 1000=0.10% 00:30:28.747 lat (msec) : 4=2.05%, 10=0.14%, 20=1.62%, 50=15.01%, 100=63.36% 00:30:28.747 lat (msec) : 250=17.72% 00:30:28.747 cpu : usr=31.77%, sys=1.52%, ctx=883, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=0.6%, 4=1.9%, 8=80.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename0: (groupid=0, jobs=1): err= 0: pid=83579: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=207, BW=829KiB/s (849kB/s)(8340KiB/10059msec) 00:30:28.747 slat (usec): min=7, max=8045, avg=30.85, stdev=291.97 00:30:28.747 clat (msec): min=14, max=155, avg=76.94, stdev=26.08 00:30:28.747 lat (msec): min=14, max=155, avg=76.97, stdev=26.08 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 18], 5.00th=[ 34], 
10.00th=[ 45], 20.00th=[ 57], 00:30:28.747 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:30:28.747 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 122], 00:30:28.747 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.747 | 99.99th=[ 157] 00:30:28.747 bw ( KiB/s): min= 592, max= 1808, per=4.17%, avg=827.60, stdev=249.95, samples=20 00:30:28.747 iops : min= 148, max= 452, avg=206.90, stdev=62.49, samples=20 00:30:28.747 lat (msec) : 20=2.11%, 50=14.05%, 100=66.67%, 250=17.17% 00:30:28.747 cpu : usr=32.48%, sys=1.53%, ctx=900, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename0: (groupid=0, jobs=1): err= 0: pid=83580: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=213, BW=854KiB/s (875kB/s)(8552KiB/10011msec) 00:30:28.747 slat (usec): min=5, max=8028, avg=27.19, stdev=245.05 00:30:28.747 clat (msec): min=22, max=162, avg=74.79, stdev=24.58 00:30:28.747 lat (msec): min=22, max=162, avg=74.82, stdev=24.59 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 52], 00:30:28.747 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 82], 00:30:28.747 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 121], 00:30:28.747 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 163], 99.95th=[ 163], 00:30:28.747 | 99.99th=[ 163] 00:30:28.747 bw ( KiB/s): min= 616, max= 1448, per=4.29%, avg=851.10, stdev=180.78, samples=20 00:30:28.747 iops : min= 154, max= 362, avg=212.75, stdev=45.22, samples=20 00:30:28.747 lat (msec) : 50=18.29%, 100=66.84%, 250=14.87% 00:30:28.747 cpu : usr=34.78%, sys=1.62%, ctx=1041, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename0: (groupid=0, jobs=1): err= 0: pid=83581: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=210, BW=841KiB/s (861kB/s)(8428KiB/10020msec) 00:30:28.747 slat (usec): min=3, max=10051, avg=38.88, stdev=383.44 00:30:28.747 clat (msec): min=24, max=155, avg=75.90, stdev=24.39 00:30:28.747 lat (msec): min=24, max=155, avg=75.94, stdev=24.40 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:30:28.747 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:30:28.747 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 122], 00:30:28.747 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.747 | 99.99th=[ 157] 00:30:28.747 bw ( KiB/s): min= 614, max= 1392, per=4.21%, avg=836.40, stdev=169.26, samples=20 00:30:28.747 iops : min= 153, max= 348, avg=209.05, stdev=42.37, samples=20 00:30:28.747 lat (msec) : 50=16.09%, 100=68.34%, 250=15.57% 00:30:28.747 cpu : usr=37.35%, sys=1.69%, ctx=1109, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=81.1%, 16=15.4%, 32=0.0%, >=64=0.0% 
00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename0: (groupid=0, jobs=1): err= 0: pid=83582: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=208, BW=835KiB/s (855kB/s)(8372KiB/10024msec) 00:30:28.747 slat (usec): min=4, max=10037, avg=36.43, stdev=327.68 00:30:28.747 clat (msec): min=23, max=157, avg=76.42, stdev=24.26 00:30:28.747 lat (msec): min=23, max=157, avg=76.46, stdev=24.25 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 55], 00:30:28.747 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:30:28.747 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 121], 00:30:28.747 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:30:28.747 | 99.99th=[ 159] 00:30:28.747 bw ( KiB/s): min= 613, max= 1282, per=4.18%, avg=830.55, stdev=148.02, samples=20 00:30:28.747 iops : min= 153, max= 320, avg=207.55, stdev=36.97, samples=20 00:30:28.747 lat (msec) : 50=15.53%, 100=67.65%, 250=16.82% 00:30:28.747 cpu : usr=38.27%, sys=1.70%, ctx=1080, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename1: (groupid=0, jobs=1): err= 0: pid=83583: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=208, BW=832KiB/s (852kB/s)(8372KiB/10062msec) 00:30:28.747 slat (usec): min=7, max=8023, avg=26.61, stdev=207.00 00:30:28.747 clat (msec): min=18, max=156, avg=76.67, stdev=25.73 00:30:28.747 lat (msec): min=18, max=156, avg=76.70, stdev=25.73 00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 19], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 55], 00:30:28.747 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:30:28.747 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 121], 00:30:28.747 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.747 | 99.99th=[ 157] 00:30:28.747 bw ( KiB/s): min= 568, max= 1676, per=4.18%, avg=830.60, stdev=229.52, samples=20 00:30:28.747 iops : min= 142, max= 419, avg=207.65, stdev=57.38, samples=20 00:30:28.747 lat (msec) : 20=1.43%, 50=13.81%, 100=66.65%, 250=18.11% 00:30:28.747 cpu : usr=34.25%, sys=1.68%, ctx=1142, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename1: (groupid=0, jobs=1): err= 0: pid=83584: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=192, BW=772KiB/s (790kB/s)(7732KiB/10021msec) 00:30:28.747 slat (usec): min=4, max=8111, avg=33.59, stdev=303.67 00:30:28.747 clat (msec): min=26, max=187, avg=82.74, stdev=27.55 00:30:28.747 lat (msec): min=26, max=187, avg=82.78, stdev=27.56 
00:30:28.747 clat percentiles (msec): 00:30:28.747 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 59], 00:30:28.747 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 85], 00:30:28.747 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 132], 00:30:28.747 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 188], 00:30:28.747 | 99.99th=[ 188] 00:30:28.747 bw ( KiB/s): min= 399, max= 1408, per=3.86%, avg=766.70, stdev=202.48, samples=20 00:30:28.747 iops : min= 99, max= 352, avg=191.60, stdev=50.70, samples=20 00:30:28.747 lat (msec) : 50=12.67%, 100=63.27%, 250=24.06% 00:30:28.747 cpu : usr=38.87%, sys=1.65%, ctx=1238, majf=0, minf=9 00:30:28.747 IO depths : 1=0.1%, 2=2.8%, 4=11.3%, 8=71.6%, 16=14.3%, 32=0.0%, >=64=0.0% 00:30:28.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.747 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.747 filename1: (groupid=0, jobs=1): err= 0: pid=83585: Thu Oct 17 19:32:35 2024 00:30:28.747 read: IOPS=224, BW=897KiB/s (918kB/s)(9044KiB/10087msec) 00:30:28.747 slat (usec): min=5, max=4097, avg=28.98, stdev=203.22 00:30:28.747 clat (usec): min=1592, max=153971, avg=71091.26, stdev=32625.43 00:30:28.747 lat (usec): min=1602, max=153996, avg=71120.24, stdev=32626.30 00:30:28.747 clat percentiles (usec): 00:30:28.747 | 1.00th=[ 1663], 5.00th=[ 3425], 10.00th=[ 22676], 20.00th=[ 48497], 00:30:28.747 | 30.00th=[ 57410], 40.00th=[ 67634], 50.00th=[ 76022], 60.00th=[ 80217], 00:30:28.747 | 70.00th=[ 85459], 80.00th=[ 95945], 90.00th=[113771], 95.00th=[123208], 00:30:28.747 | 99.00th=[135267], 99.50th=[135267], 99.90th=[154141], 99.95th=[154141], 00:30:28.747 | 99.99th=[154141] 00:30:28.747 bw ( KiB/s): min= 544, max= 3200, per=4.52%, avg=897.90, stdev=551.55, samples=20 00:30:28.747 iops : min= 136, max= 800, avg=224.45, stdev=137.89, samples=20 00:30:28.747 lat (msec) : 2=3.54%, 4=3.72%, 10=0.53%, 20=1.95%, 50=11.94% 00:30:28.747 lat (msec) : 100=60.42%, 250=17.91% 00:30:28.747 cpu : usr=43.67%, sys=2.08%, ctx=1411, majf=0, minf=0 00:30:28.747 IO depths : 1=0.4%, 2=1.8%, 4=5.8%, 8=76.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename1: (groupid=0, jobs=1): err= 0: pid=83586: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=211, BW=845KiB/s (865kB/s)(8472KiB/10025msec) 00:30:28.748 slat (usec): min=3, max=8033, avg=32.64, stdev=308.54 00:30:28.748 clat (msec): min=20, max=163, avg=75.57, stdev=24.51 00:30:28.748 lat (msec): min=20, max=163, avg=75.60, stdev=24.52 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 54], 00:30:28.748 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 83], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 110], 95.00th=[ 122], 00:30:28.748 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 163], 99.95th=[ 163], 00:30:28.748 | 99.99th=[ 163] 00:30:28.748 bw ( KiB/s): min= 608, max= 1456, per=4.23%, avg=840.45, stdev=181.85, samples=20 00:30:28.748 iops : min= 152, max= 364, avg=210.05, stdev=45.50, samples=20 00:30:28.748 lat 
(msec) : 50=17.89%, 100=66.43%, 250=15.68% 00:30:28.748 cpu : usr=31.84%, sys=1.65%, ctx=917, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename1: (groupid=0, jobs=1): err= 0: pid=83587: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=210, BW=842KiB/s (863kB/s)(8464KiB/10047msec) 00:30:28.748 slat (usec): min=5, max=8063, avg=33.02, stdev=247.08 00:30:28.748 clat (msec): min=23, max=158, avg=75.69, stdev=25.13 00:30:28.748 lat (msec): min=23, max=158, avg=75.73, stdev=25.14 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 54], 00:30:28.748 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 112], 95.00th=[ 122], 00:30:28.748 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:30:28.748 | 99.99th=[ 159] 00:30:28.748 bw ( KiB/s): min= 608, max= 1662, per=4.24%, avg=842.50, stdev=222.87, samples=20 00:30:28.748 iops : min= 152, max= 415, avg=210.45, stdev=55.69, samples=20 00:30:28.748 lat (msec) : 50=15.03%, 100=68.48%, 250=16.49% 00:30:28.748 cpu : usr=43.97%, sys=2.07%, ctx=1302, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename1: (groupid=0, jobs=1): err= 0: pid=83588: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=189, BW=758KiB/s (776kB/s)(7620KiB/10058msec) 00:30:28.748 slat (usec): min=3, max=8051, avg=32.19, stdev=298.94 00:30:28.748 clat (msec): min=23, max=153, avg=84.15, stdev=25.10 00:30:28.748 lat (msec): min=23, max=153, avg=84.18, stdev=25.10 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 68], 00:30:28.748 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:30:28.748 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 122], 95.00th=[ 128], 00:30:28.748 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 155], 99.95th=[ 155], 00:30:28.748 | 99.99th=[ 155] 00:30:28.748 bw ( KiB/s): min= 512, max= 1424, per=3.82%, avg=758.00, stdev=190.09, samples=20 00:30:28.748 iops : min= 128, max= 356, avg=189.50, stdev=47.52, samples=20 00:30:28.748 lat (msec) : 50=10.50%, 100=64.51%, 250=24.99% 00:30:28.748 cpu : usr=42.42%, sys=2.12%, ctx=1322, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=2.9%, 4=11.4%, 8=70.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=90.6%, 8=6.9%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename1: (groupid=0, jobs=1): err= 0: pid=83589: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=208, BW=834KiB/s (854kB/s)(8392KiB/10064msec) 00:30:28.748 slat 
(usec): min=7, max=8043, avg=55.54, stdev=523.94 00:30:28.748 clat (msec): min=13, max=159, avg=76.39, stdev=27.05 00:30:28.748 lat (msec): min=13, max=159, avg=76.44, stdev=27.05 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 16], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 57], 00:30:28.748 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 118], 95.00th=[ 122], 00:30:28.748 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:30:28.748 | 99.99th=[ 159] 00:30:28.748 bw ( KiB/s): min= 536, max= 1900, per=4.19%, avg=832.60, stdev=275.86, samples=20 00:30:28.748 iops : min= 134, max= 475, avg=208.15, stdev=68.97, samples=20 00:30:28.748 lat (msec) : 20=2.43%, 50=14.78%, 100=65.44%, 250=17.35% 00:30:28.748 cpu : usr=35.34%, sys=1.57%, ctx=1046, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename1: (groupid=0, jobs=1): err= 0: pid=83590: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=205, BW=822KiB/s (842kB/s)(8248KiB/10030msec) 00:30:28.748 slat (usec): min=4, max=8036, avg=38.98, stdev=369.39 00:30:28.748 clat (msec): min=23, max=155, avg=77.57, stdev=24.38 00:30:28.748 lat (msec): min=23, max=155, avg=77.61, stdev=24.38 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:30:28.748 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 113], 95.00th=[ 122], 00:30:28.748 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.748 | 99.99th=[ 157] 00:30:28.748 bw ( KiB/s): min= 608, max= 1520, per=4.14%, avg=821.10, stdev=191.20, samples=20 00:30:28.748 iops : min= 152, max= 380, avg=205.25, stdev=47.83, samples=20 00:30:28.748 lat (msec) : 50=15.08%, 100=67.07%, 250=17.85% 00:30:28.748 cpu : usr=37.95%, sys=1.64%, ctx=1134, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename2: (groupid=0, jobs=1): err= 0: pid=83591: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=208, BW=834KiB/s (854kB/s)(8364KiB/10029msec) 00:30:28.748 slat (usec): min=4, max=4037, avg=27.82, stdev=152.23 00:30:28.748 clat (msec): min=23, max=156, avg=76.56, stdev=24.96 00:30:28.748 lat (msec): min=23, max=156, avg=76.58, stdev=24.96 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 54], 00:30:28.748 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 122], 00:30:28.748 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.748 | 99.99th=[ 157] 00:30:28.748 bw ( KiB/s): min= 608, max= 1410, per=4.19%, avg=832.80, stdev=168.55, samples=20 00:30:28.748 iops : min= 152, max= 
352, avg=208.15, stdev=42.08, samples=20 00:30:28.748 lat (msec) : 50=17.60%, 100=65.66%, 250=16.74% 00:30:28.748 cpu : usr=36.10%, sys=1.72%, ctx=1025, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=80.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename2: (groupid=0, jobs=1): err= 0: pid=83592: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=201, BW=805KiB/s (824kB/s)(8072KiB/10029msec) 00:30:28.748 slat (usec): min=4, max=8050, avg=36.17, stdev=357.08 00:30:28.748 clat (msec): min=26, max=155, avg=79.27, stdev=23.68 00:30:28.748 lat (msec): min=26, max=155, avg=79.31, stdev=23.69 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 60], 00:30:28.748 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:30:28.748 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 121], 00:30:28.748 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.748 | 99.99th=[ 157] 00:30:28.748 bw ( KiB/s): min= 611, max= 1298, per=4.04%, avg=802.95, stdev=148.14, samples=20 00:30:28.748 iops : min= 152, max= 324, avg=200.65, stdev=37.03, samples=20 00:30:28.748 lat (msec) : 50=14.12%, 100=67.94%, 250=17.94% 00:30:28.748 cpu : usr=31.66%, sys=1.45%, ctx=884, majf=0, minf=9 00:30:28.748 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.748 filename2: (groupid=0, jobs=1): err= 0: pid=83593: Thu Oct 17 19:32:35 2024 00:30:28.748 read: IOPS=185, BW=744KiB/s (761kB/s)(7480KiB/10059msec) 00:30:28.748 slat (usec): min=4, max=8063, avg=37.83, stdev=338.28 00:30:28.748 clat (msec): min=22, max=180, avg=85.65, stdev=28.23 00:30:28.748 lat (msec): min=22, max=180, avg=85.69, stdev=28.26 00:30:28.748 clat percentiles (msec): 00:30:28.748 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 68], 00:30:28.748 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 87], 00:30:28.748 | 70.00th=[ 96], 80.00th=[ 111], 90.00th=[ 127], 95.00th=[ 134], 00:30:28.748 | 99.00th=[ 150], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:30:28.748 | 99.99th=[ 180] 00:30:28.748 bw ( KiB/s): min= 512, max= 1536, per=3.74%, avg=741.60, stdev=218.41, samples=20 00:30:28.748 iops : min= 128, max= 384, avg=185.40, stdev=54.60, samples=20 00:30:28.748 lat (msec) : 50=11.07%, 100=60.75%, 250=28.18% 00:30:28.748 cpu : usr=42.16%, sys=2.05%, ctx=1425, majf=0, minf=9 00:30:28.748 IO depths : 1=0.2%, 2=4.1%, 4=15.9%, 8=66.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:28.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.748 complete : 0=0.0%, 4=91.7%, 8=4.8%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=1870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 filename2: (groupid=0, jobs=1): err= 0: pid=83594: Thu Oct 17 19:32:35 2024 00:30:28.749 read: IOPS=210, 
BW=843KiB/s (863kB/s)(8472KiB/10055msec) 00:30:28.749 slat (usec): min=4, max=8065, avg=28.95, stdev=230.74 00:30:28.749 clat (msec): min=12, max=157, avg=75.67, stdev=25.05 00:30:28.749 lat (msec): min=12, max=157, avg=75.70, stdev=25.05 00:30:28.749 clat percentiles (msec): 00:30:28.749 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 54], 00:30:28.749 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 82], 00:30:28.749 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 113], 95.00th=[ 123], 00:30:28.749 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.749 | 99.99th=[ 157] 00:30:28.749 bw ( KiB/s): min= 584, max= 1544, per=4.25%, avg=843.20, stdev=204.64, samples=20 00:30:28.749 iops : min= 146, max= 386, avg=210.80, stdev=51.16, samples=20 00:30:28.749 lat (msec) : 20=0.09%, 50=15.49%, 100=68.27%, 250=16.15% 00:30:28.749 cpu : usr=41.37%, sys=1.98%, ctx=1329, majf=0, minf=9 00:30:28.749 IO depths : 1=0.1%, 2=0.5%, 4=2.2%, 8=81.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:28.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 filename2: (groupid=0, jobs=1): err= 0: pid=83595: Thu Oct 17 19:32:35 2024 00:30:28.749 read: IOPS=201, BW=808KiB/s (827kB/s)(8128KiB/10063msec) 00:30:28.749 slat (usec): min=5, max=8041, avg=32.14, stdev=281.54 00:30:28.749 clat (msec): min=13, max=162, avg=78.96, stdev=26.07 00:30:28.749 lat (msec): min=13, max=162, avg=78.99, stdev=26.06 00:30:28.749 clat percentiles (msec): 00:30:28.749 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 57], 00:30:28.749 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 84], 00:30:28.749 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 118], 95.00th=[ 124], 00:30:28.749 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.749 | 99.99th=[ 163] 00:30:28.749 bw ( KiB/s): min= 584, max= 1656, per=4.06%, avg=806.40, stdev=226.22, samples=20 00:30:28.749 iops : min= 146, max= 414, avg=201.60, stdev=56.56, samples=20 00:30:28.749 lat (msec) : 20=0.79%, 50=13.73%, 100=66.83%, 250=18.65% 00:30:28.749 cpu : usr=39.47%, sys=1.67%, ctx=1012, majf=0, minf=9 00:30:28.749 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=80.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:30:28.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 filename2: (groupid=0, jobs=1): err= 0: pid=83596: Thu Oct 17 19:32:35 2024 00:30:28.749 read: IOPS=206, BW=827KiB/s (847kB/s)(8316KiB/10058msec) 00:30:28.749 slat (usec): min=8, max=8023, avg=28.50, stdev=232.52 00:30:28.749 clat (msec): min=13, max=167, avg=77.13, stdev=27.23 00:30:28.749 lat (msec): min=13, max=167, avg=77.16, stdev=27.22 00:30:28.749 clat percentiles (msec): 00:30:28.749 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 43], 20.00th=[ 53], 00:30:28.749 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:30:28.749 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 118], 95.00th=[ 122], 00:30:28.749 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 167], 99.95th=[ 169], 00:30:28.749 | 99.99th=[ 169] 00:30:28.749 bw ( KiB/s): min= 528, max= 1772, per=4.15%, 
avg=825.00, stdev=252.05, samples=20 00:30:28.749 iops : min= 132, max= 443, avg=206.25, stdev=63.01, samples=20 00:30:28.749 lat (msec) : 20=1.44%, 50=15.82%, 100=63.92%, 250=18.81% 00:30:28.749 cpu : usr=34.76%, sys=1.90%, ctx=1035, majf=0, minf=9 00:30:28.749 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:28.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 filename2: (groupid=0, jobs=1): err= 0: pid=83597: Thu Oct 17 19:32:35 2024 00:30:28.749 read: IOPS=213, BW=853KiB/s (874kB/s)(8552KiB/10021msec) 00:30:28.749 slat (usec): min=5, max=8047, avg=31.81, stdev=300.61 00:30:28.749 clat (msec): min=22, max=155, avg=74.88, stdev=24.93 00:30:28.749 lat (msec): min=22, max=155, avg=74.91, stdev=24.95 00:30:28.749 clat percentiles (msec): 00:30:28.749 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:30:28.749 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 83], 00:30:28.749 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:30:28.749 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 157], 99.95th=[ 157], 00:30:28.749 | 99.99th=[ 157] 00:30:28.749 bw ( KiB/s): min= 608, max= 1520, per=4.27%, avg=848.65, stdev=192.54, samples=20 00:30:28.749 iops : min= 152, max= 380, avg=212.10, stdev=48.16, samples=20 00:30:28.749 lat (msec) : 50=21.00%, 100=63.80%, 250=15.20% 00:30:28.749 cpu : usr=31.72%, sys=1.42%, ctx=871, majf=0, minf=9 00:30:28.749 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:28.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 filename2: (groupid=0, jobs=1): err= 0: pid=83598: Thu Oct 17 19:32:35 2024 00:30:28.749 read: IOPS=209, BW=837KiB/s (857kB/s)(8388KiB/10027msec) 00:30:28.749 slat (usec): min=3, max=8053, avg=44.07, stdev=428.72 00:30:28.749 clat (msec): min=23, max=154, avg=76.26, stdev=24.27 00:30:28.749 lat (msec): min=23, max=155, avg=76.30, stdev=24.27 00:30:28.749 clat percentiles (msec): 00:30:28.749 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:30:28.749 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:30:28.749 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:30:28.749 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:30:28.749 | 99.99th=[ 155] 00:30:28.749 bw ( KiB/s): min= 611, max= 1488, per=4.20%, avg=834.50, stdev=184.99, samples=20 00:30:28.749 iops : min= 152, max= 372, avg=208.55, stdev=46.33, samples=20 00:30:28.749 lat (msec) : 50=16.02%, 100=68.86%, 250=15.12% 00:30:28.749 cpu : usr=32.31%, sys=1.57%, ctx=894, majf=0, minf=9 00:30:28.749 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:28.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.749 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:28.749 00:30:28.749 Run status 
group 0 (all jobs): 00:30:28.749 READ: bw=19.4MiB/s (20.3MB/s), 744KiB/s-897KiB/s (761kB/s-918kB/s), io=195MiB (205MB), run=10002-10087msec 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.749 19:32:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.749 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 bdev_null0 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 [2024-10-17 19:32:36.248042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 bdev_null1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.750 19:32:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.750 { 00:30:28.750 "params": { 00:30:28.750 "name": "Nvme$subsystem", 00:30:28.750 "trtype": "$TEST_TRANSPORT", 00:30:28.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.750 "adrfam": "ipv4", 00:30:28.750 "trsvcid": "$NVMF_PORT", 00:30:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.750 "hdgst": ${hdgst:-false}, 00:30:28.750 "ddgst": ${ddgst:-false} 00:30:28.750 }, 00:30:28.750 "method": "bdev_nvme_attach_controller" 00:30:28.750 } 00:30:28.750 EOF 00:30:28.750 )") 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:28.750 { 00:30:28.750 "params": { 00:30:28.750 "name": "Nvme$subsystem", 00:30:28.750 "trtype": "$TEST_TRANSPORT", 00:30:28.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.750 "adrfam": "ipv4", 00:30:28.750 "trsvcid": "$NVMF_PORT", 00:30:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.750 "hdgst": ${hdgst:-false}, 00:30:28.750 "ddgst": ${ddgst:-false} 00:30:28.750 }, 00:30:28.750 "method": "bdev_nvme_attach_controller" 00:30:28.750 } 00:30:28.750 EOF 00:30:28.750 )") 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:28.750 "params": { 00:30:28.750 "name": "Nvme0", 00:30:28.750 "trtype": "tcp", 00:30:28.750 "traddr": "10.0.0.3", 00:30:28.750 "adrfam": "ipv4", 00:30:28.750 "trsvcid": "4420", 00:30:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.750 "hdgst": false, 00:30:28.750 "ddgst": false 00:30:28.750 }, 00:30:28.750 "method": "bdev_nvme_attach_controller" 00:30:28.750 },{ 00:30:28.750 "params": { 00:30:28.750 "name": "Nvme1", 00:30:28.750 "trtype": "tcp", 00:30:28.750 "traddr": "10.0.0.3", 00:30:28.750 "adrfam": "ipv4", 00:30:28.750 "trsvcid": "4420", 00:30:28.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.750 "hdgst": false, 00:30:28.750 "ddgst": false 00:30:28.750 }, 00:30:28.750 "method": "bdev_nvme_attach_controller" 00:30:28.750 }' 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:28.750 19:32:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.750 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:28.750 ... 00:30:28.750 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:28.750 ... 
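The create_subsystems / fio plumbing traced above reduces to a short RPC sequence plus one fio invocation. A minimal sketch follows, assuming scripts/rpc.py is the transport behind rpc_cmd and using named files in place of the /dev/fd pipes the test feeds to fio; everything else is taken from the commands visible in this run:

    # null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP on 10.0.0.3:4420
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # repeat for bdev_null1 / cnode1, then drive both subsystems through the fio bdev plugin
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme_attach.json jobfile.fio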
00:30:28.750 fio-3.35 00:30:28.750 Starting 4 threads 00:30:32.931 00:30:32.931 filename0: (groupid=0, jobs=1): err= 0: pid=83753: Thu Oct 17 19:32:42 2024 00:30:32.931 read: IOPS=2031, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5001msec) 00:30:32.931 slat (nsec): min=7593, max=76048, avg=20942.52, stdev=11052.19 00:30:32.931 clat (usec): min=525, max=7635, avg=3870.63, stdev=1022.94 00:30:32.931 lat (usec): min=539, max=7670, avg=3891.57, stdev=1023.91 00:30:32.931 clat percentiles (usec): 00:30:32.931 | 1.00th=[ 1385], 5.00th=[ 2024], 10.00th=[ 2278], 20.00th=[ 2802], 00:30:32.931 | 30.00th=[ 3392], 40.00th=[ 3982], 50.00th=[ 4178], 60.00th=[ 4490], 00:30:32.931 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5014], 00:30:32.931 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6849], 99.95th=[ 7177], 00:30:32.931 | 99.99th=[ 7373] 00:30:32.931 bw ( KiB/s): min=14320, max=18288, per=25.68%, avg=16334.00, stdev=1433.48, samples=9 00:30:32.931 iops : min= 1790, max= 2286, avg=2041.67, stdev=179.24, samples=9 00:30:32.931 lat (usec) : 750=0.01%, 1000=0.16% 00:30:32.931 lat (msec) : 2=4.56%, 4=36.04%, 10=59.24% 00:30:32.931 cpu : usr=93.72%, sys=5.24%, ctx=4, majf=0, minf=10 00:30:32.931 IO depths : 1=1.3%, 2=9.7%, 4=58.6%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 issued rwts: total=10159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:32.931 filename0: (groupid=0, jobs=1): err= 0: pid=83754: Thu Oct 17 19:32:42 2024 00:30:32.931 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5001msec) 00:30:32.931 slat (nsec): min=7501, max=82838, avg=20560.83, stdev=11170.75 00:30:32.931 clat (usec): min=625, max=8840, avg=4006.76, stdev=1065.73 00:30:32.931 lat (usec): min=632, max=8856, avg=4027.32, stdev=1066.57 00:30:32.931 clat percentiles (usec): 00:30:32.931 | 1.00th=[ 1237], 5.00th=[ 2073], 10.00th=[ 2311], 20.00th=[ 2868], 00:30:32.931 | 30.00th=[ 3818], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4490], 00:30:32.931 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5145], 00:30:32.931 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7373], 99.95th=[ 8717], 00:30:32.931 | 99.99th=[ 8848] 00:30:32.931 bw ( KiB/s): min=13632, max=19616, per=24.52%, avg=15598.00, stdev=1956.53, samples=9 00:30:32.931 iops : min= 1704, max= 2452, avg=1949.67, stdev=244.57, samples=9 00:30:32.931 lat (usec) : 750=0.05%, 1000=0.22% 00:30:32.931 lat (msec) : 2=4.29%, 4=30.88%, 10=64.56% 00:30:32.931 cpu : usr=94.16%, sys=4.84%, ctx=8, majf=0, minf=10 00:30:32.931 IO depths : 1=1.3%, 2=13.0%, 4=56.6%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 issued rwts: total=9819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:32.931 filename1: (groupid=0, jobs=1): err= 0: pid=83755: Thu Oct 17 19:32:42 2024 00:30:32.931 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5003msec) 00:30:32.931 slat (nsec): min=7256, max=76000, avg=17100.48, stdev=9791.54 00:30:32.931 clat (usec): min=504, max=7350, avg=3837.41, stdev=1017.28 00:30:32.931 lat (usec): min=518, max=7384, avg=3854.51, stdev=1017.84 00:30:32.931 clat percentiles (usec): 00:30:32.931 | 
1.00th=[ 1385], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2638], 00:30:32.931 | 30.00th=[ 3195], 40.00th=[ 3982], 50.00th=[ 4228], 60.00th=[ 4359], 00:30:32.931 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5080], 00:30:32.931 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 6259], 99.95th=[ 7177], 00:30:32.931 | 99.99th=[ 7373] 00:30:32.931 bw ( KiB/s): min=14528, max=18560, per=26.29%, avg=16725.22, stdev=1313.53, samples=9 00:30:32.931 iops : min= 1816, max= 2320, avg=2090.56, stdev=164.23, samples=9 00:30:32.931 lat (usec) : 750=0.01%, 1000=0.27% 00:30:32.931 lat (msec) : 2=3.10%, 4=37.24%, 10=59.37% 00:30:32.931 cpu : usr=93.86%, sys=5.12%, ctx=11, majf=0, minf=0 00:30:32.931 IO depths : 1=0.7%, 2=9.6%, 4=58.8%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 issued rwts: total=10289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:32.931 filename1: (groupid=0, jobs=1): err= 0: pid=83756: Thu Oct 17 19:32:42 2024 00:30:32.931 read: IOPS=1901, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5002msec) 00:30:32.931 slat (nsec): min=7562, max=79287, avg=23739.37, stdev=10146.69 00:30:32.931 clat (usec): min=506, max=7366, avg=4122.46, stdev=882.46 00:30:32.931 lat (usec): min=519, max=7393, avg=4146.20, stdev=882.53 00:30:32.931 clat percentiles (usec): 00:30:32.931 | 1.00th=[ 1385], 5.00th=[ 2311], 10.00th=[ 2671], 20.00th=[ 3458], 00:30:32.931 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4555], 00:30:32.931 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5080], 00:30:32.931 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5997], 99.95th=[ 7046], 00:30:32.931 | 99.99th=[ 7373] 00:30:32.931 bw ( KiB/s): min=14080, max=16800, per=23.69%, avg=15066.67, stdev=981.20, samples=9 00:30:32.931 iops : min= 1760, max= 2100, avg=1883.33, stdev=122.65, samples=9 00:30:32.931 lat (usec) : 750=0.02%, 1000=0.06% 00:30:32.931 lat (msec) : 2=2.02%, 4=27.33%, 10=70.57% 00:30:32.931 cpu : usr=94.74%, sys=4.40%, ctx=9, majf=0, minf=9 00:30:32.931 IO depths : 1=1.8%, 2=14.9%, 4=55.6%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.931 issued rwts: total=9511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:32.931 00:30:32.931 Run status group 0 (all jobs): 00:30:32.931 READ: bw=62.1MiB/s (65.1MB/s), 14.9MiB/s-16.1MiB/s (15.6MB/s-16.8MB/s), io=311MiB (326MB), run=5001-5003msec 00:30:33.189 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:33.189 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:33.189 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.190 00:30:33.190 real 0m23.694s 00:30:33.190 user 2m4.888s 00:30:33.190 sys 0m6.888s 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.190 ************************************ 00:30:33.190 END TEST fio_dif_rand_params 00:30:33.190 ************************************ 00:30:33.190 19:32:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:33.190 19:32:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:33.190 19:32:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:33.190 19:32:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:33.448 ************************************ 00:30:33.448 START TEST fio_dif_digest 00:30:33.448 ************************************ 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:33.448 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.449 bdev_null0 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.449 [2024-10-17 19:32:42.491152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest 
-- target/dif.sh@54 -- # local file 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:33.449 { 00:30:33.449 "params": { 00:30:33.449 "name": "Nvme$subsystem", 00:30:33.449 "trtype": "$TEST_TRANSPORT", 00:30:33.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.449 "adrfam": "ipv4", 00:30:33.449 "trsvcid": "$NVMF_PORT", 00:30:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.449 "hdgst": ${hdgst:-false}, 00:30:33.449 "ddgst": ${ddgst:-false} 00:30:33.449 }, 00:30:33.449 "method": "bdev_nvme_attach_controller" 00:30:33.449 } 00:30:33.449 EOF 00:30:33.449 )") 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:33.449 "params": { 00:30:33.449 "name": "Nvme0", 00:30:33.449 "trtype": "tcp", 00:30:33.449 "traddr": "10.0.0.3", 00:30:33.449 "adrfam": "ipv4", 00:30:33.449 "trsvcid": "4420", 00:30:33.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:33.449 "hdgst": true, 00:30:33.449 "ddgst": true 00:30:33.449 }, 00:30:33.449 "method": "bdev_nvme_attach_controller" 00:30:33.449 }' 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:33.449 19:32:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.707 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:33.707 ... 
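The digest run reuses the same harness with hdgst/ddgst enabled in the attach-controller JSON above; the job side that gen_fio_conf pipes in over /dev/fd/61 corresponds to the parameters set at target/dif.sh@127 (128k blocks, 3 jobs, queue depth 3, 10 s of random reads). A sketch of an equivalent standalone jobfile, with the section name and filename chosen for illustration (a controller attached as Nvme0 normally exposes its first namespace as bdev Nvme0n1):

    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=10

    [digest-job]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    filename=Nvme0n1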
00:30:33.707 fio-3.35 00:30:33.707 Starting 3 threads 00:30:45.927 00:30:45.927 filename0: (groupid=0, jobs=1): err= 0: pid=83861: Thu Oct 17 19:32:53 2024 00:30:45.927 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10009msec) 00:30:45.927 slat (nsec): min=5233, max=73747, avg=11222.49, stdev=4518.15 00:30:45.927 clat (usec): min=10367, max=14743, avg=13280.66, stdev=284.34 00:30:45.927 lat (usec): min=10375, max=14762, avg=13291.88, stdev=284.44 00:30:45.927 clat percentiles (usec): 00:30:45.927 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:30:45.927 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:30:45.927 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:30:45.927 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:30:45.927 | 99.99th=[14746] 00:30:45.927 bw ( KiB/s): min=27648, max=29184, per=33.35%, avg=28860.63, stdev=466.16, samples=19 00:30:45.927 iops : min= 216, max= 228, avg=225.47, stdev= 3.64, samples=19 00:30:45.927 lat (msec) : 20=100.00% 00:30:45.927 cpu : usr=90.51%, sys=8.92%, ctx=5, majf=0, minf=0 00:30:45.927 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.927 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.927 filename0: (groupid=0, jobs=1): err= 0: pid=83862: Thu Oct 17 19:32:53 2024 00:30:45.927 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10009msec) 00:30:45.927 slat (nsec): min=8108, max=84957, avg=11957.77, stdev=4697.52 00:30:45.927 clat (usec): min=8794, max=15315, avg=13278.43, stdev=318.97 00:30:45.927 lat (usec): min=8803, max=15351, avg=13290.39, stdev=319.11 00:30:45.927 clat percentiles (usec): 00:30:45.927 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:30:45.927 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:30:45.927 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:30:45.927 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15270], 99.95th=[15270], 00:30:45.927 | 99.99th=[15270] 00:30:45.927 bw ( KiB/s): min=27648, max=29952, per=33.36%, avg=28863.58, stdev=529.38, samples=19 00:30:45.927 iops : min= 216, max= 234, avg=225.47, stdev= 4.15, samples=19 00:30:45.927 lat (msec) : 10=0.13%, 20=99.87% 00:30:45.927 cpu : usr=91.12%, sys=8.21%, ctx=22, majf=0, minf=0 00:30:45.927 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.927 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.927 filename0: (groupid=0, jobs=1): err= 0: pid=83863: Thu Oct 17 19:32:53 2024 00:30:45.927 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10012msec) 00:30:45.927 slat (nsec): min=7144, max=56506, avg=11691.67, stdev=4890.27 00:30:45.927 clat (usec): min=12982, max=14784, avg=13284.00, stdev=264.67 00:30:45.927 lat (usec): min=12993, max=14819, avg=13295.69, stdev=265.17 00:30:45.927 clat percentiles (usec): 00:30:45.927 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:30:45.927 | 30.00th=[13173], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13304], 00:30:45.927 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:30:45.927 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:30:45.927 | 99.99th=[14746] 00:30:45.927 bw ( KiB/s): min=27648, max=29184, per=33.33%, avg=28835.45, stdev=462.36, samples=20 00:30:45.927 iops : min= 216, max= 228, avg=225.25, stdev= 3.60, samples=20 00:30:45.927 lat (msec) : 20=100.00% 00:30:45.927 cpu : usr=90.38%, sys=8.92%, ctx=27, majf=0, minf=0 00:30:45.927 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.927 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.927 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.927 00:30:45.927 Run status group 0 (all jobs): 00:30:45.927 READ: bw=84.5MiB/s (88.6MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=846MiB (887MB), run=10009-10012msec 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.927 00:30:45.927 real 0m11.085s 00:30:45.927 user 0m27.930s 00:30:45.927 sys 0m2.905s 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.927 19:32:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.927 ************************************ 00:30:45.927 END TEST fio_dif_digest 00:30:45.927 ************************************ 00:30:45.927 19:32:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:45.927 19:32:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:45.927 19:32:53 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:45.927 19:32:53 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:30:45.927 19:32:53 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.927 19:32:53 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.928 rmmod nvme_tcp 00:30:45.928 rmmod nvme_fabrics 00:30:45.928 rmmod nvme_keyring 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.928 19:32:53 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 83100 ']' 00:30:45.928 19:32:53 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 83100 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 83100 ']' 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 83100 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83100 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:45.928 killing process with pid 83100 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83100' 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@969 -- # kill 83100 00:30:45.928 19:32:53 nvmf_dif -- common/autotest_common.sh@974 -- # wait 83100 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:45.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.928 Waiting for block devices as requested 00:30:45.928 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:45.928 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:45.928 19:32:54 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:30:45.928 19:32:54 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.928 19:32:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.928 19:32:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.928 19:32:55 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:30:45.928 00:30:45.928 real 1m0.251s 00:30:45.928 user 3m48.993s 00:30:45.928 sys 0m18.859s 00:30:45.928 19:32:55 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.928 ************************************ 00:30:45.928 END TEST nvmf_dif 00:30:45.928 ************************************ 00:30:45.928 19:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 19:32:55 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:45.928 19:32:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:45.928 19:32:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.928 19:32:55 -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 ************************************ 00:30:45.928 START TEST nvmf_abort_qd_sizes 00:30:45.928 ************************************ 00:30:45.928 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:45.928 * Looking for test storage... 00:30:45.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:45.928 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:45.928 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:30:45.928 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.189 --rc genhtml_branch_coverage=1 00:30:46.189 --rc genhtml_function_coverage=1 00:30:46.189 --rc genhtml_legend=1 00:30:46.189 --rc geninfo_all_blocks=1 00:30:46.189 --rc geninfo_unexecuted_blocks=1 00:30:46.189 00:30:46.189 ' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.189 --rc genhtml_branch_coverage=1 00:30:46.189 --rc genhtml_function_coverage=1 00:30:46.189 --rc genhtml_legend=1 00:30:46.189 --rc geninfo_all_blocks=1 00:30:46.189 --rc geninfo_unexecuted_blocks=1 00:30:46.189 00:30:46.189 ' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.189 --rc genhtml_branch_coverage=1 00:30:46.189 --rc genhtml_function_coverage=1 00:30:46.189 --rc genhtml_legend=1 00:30:46.189 --rc geninfo_all_blocks=1 00:30:46.189 --rc geninfo_unexecuted_blocks=1 00:30:46.189 00:30:46.189 ' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:46.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.189 --rc genhtml_branch_coverage=1 00:30:46.189 --rc genhtml_function_coverage=1 00:30:46.189 --rc genhtml_legend=1 00:30:46.189 --rc geninfo_all_blocks=1 00:30:46.189 --rc geninfo_unexecuted_blocks=1 00:30:46.189 00:30:46.189 ' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.189 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:30:46.189 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:46.190 Cannot find device "nvmf_init_br" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:46.190 Cannot find device "nvmf_init_br2" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:46.190 Cannot find device "nvmf_tgt_br" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:46.190 Cannot find device "nvmf_tgt_br2" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:46.190 Cannot find device "nvmf_init_br" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:46.190 Cannot find device "nvmf_init_br2" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:46.190 Cannot find device "nvmf_tgt_br" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:46.190 Cannot find device "nvmf_tgt_br2" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:46.190 Cannot find device "nvmf_br" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:46.190 Cannot find device "nvmf_init_if" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:46.190 Cannot find device "nvmf_init_if2" 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:46.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:46.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:30:46.190 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:46.447 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:46.448 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:46.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:46.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:30:46.704 00:30:46.704 --- 10.0.0.3 ping statistics --- 00:30:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.704 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:46.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:46.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:30:46.704 00:30:46.704 --- 10.0.0.4 ping statistics --- 00:30:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.704 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:46.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:30:46.704 00:30:46.704 --- 10.0.0.1 ping statistics --- 00:30:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.704 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:46.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:46.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:30:46.704 00:30:46.704 --- 10.0.0.2 ping statistics --- 00:30:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.704 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:30:46.704 19:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:47.269 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:47.269 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=84507 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 84507 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 84507 ']' 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:47.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:47.528 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:47.528 [2024-10-17 19:32:56.672638] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
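The launch-and-wait pattern traced above (nvmfappstart with -m 0xf, then waitforlisten 84507) boils down to starting nvmf_tgt inside the target namespace and polling its RPC socket until it answers. Below is a minimal sketch of that pattern, assuming the stock SPDK tree layout shown in the trace and using rpc.py rpc_get_methods as the liveness probe; it is a simplified stand-in for the autotest_common.sh helpers, not a copy of them.

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!

# Poll the UNIX-domain RPC socket; it is reachable from outside the netns.
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done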
00:30:47.528 [2024-10-17 19:32:56.672987] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.786 [2024-10-17 19:32:56.811035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.786 [2024-10-17 19:32:56.883680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.786 [2024-10-17 19:32:56.883974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.786 [2024-10-17 19:32:56.884201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.786 [2024-10-17 19:32:56.884452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.786 [2024-10-17 19:32:56.884557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.786 [2024-10-17 19:32:56.885764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.786 [2024-10-17 19:32:56.885892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.786 [2024-10-17 19:32:56.886047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.786 [2024-10-17 19:32:56.886059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.786 [2024-10-17 19:32:56.940654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:30:48.723 19:32:57 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
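The block above is nvme_in_userspace from scripts/common.sh doing PCI discovery: class 01 (mass storage), subclass 08 (NVM) and prog-if 02 (NVMe) become the "0108" and -p02 filters fed to lspci, and each matching BDF is kept only if it is still bound to the kernel nvme driver. A condensed sketch of the same walk follows; it skips the allow/deny-list handling that pci_can_use adds in the real helper.

# List candidate NVMe functions the same way the trace does, then keep the
# ones the kernel nvme driver currently owns.
mapfile -t candidates < <(lspci -mm -n -D | grep -i -- -p02 |
    awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')

nvmes=()
for bdf in "${candidates[@]}"; do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
done
printf '%s\n' "${nvmes[@]}"   # 0000:00:10.0 and 0000:00:11.0 on this VM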
00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 ************************************ 00:30:48.723 START TEST spdk_target_abort 00:30:48.723 ************************************ 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 spdk_targetn1 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 [2024-10-17 19:32:57.887932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.723 [2024-10-17 19:32:57.923415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:48.723 19:32:57 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:48.723 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:48.724 19:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:52.007 Initializing NVMe Controllers 00:30:52.007 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:52.007 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:52.007 Initialization complete. Launching workers. 
00:30:52.007 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10406, failed: 0 00:30:52.007 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1034, failed to submit 9372 00:30:52.007 success 776, unsuccessful 258, failed 0 00:30:52.007 19:33:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:52.007 19:33:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.195 Initializing NVMe Controllers 00:30:56.195 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:56.195 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:56.195 Initialization complete. Launching workers. 00:30:56.195 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8910, failed: 0 00:30:56.195 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7746 00:30:56.195 success 352, unsuccessful 812, failed 0 00:30:56.195 19:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:56.195 19:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:58.733 Initializing NVMe Controllers 00:30:58.733 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:58.733 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:58.733 Initialization complete. Launching workers. 
00:30:58.733 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31478, failed: 0 00:30:58.733 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2238, failed to submit 29240 00:30:58.733 success 413, unsuccessful 1825, failed 0 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.733 19:33:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84507 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 84507 ']' 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 84507 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84507 00:30:59.300 killing process with pid 84507 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84507' 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 84507 00:30:59.300 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 84507 00:30:59.558 ************************************ 00:30:59.558 END TEST spdk_target_abort 00:30:59.558 ************************************ 00:30:59.558 00:30:59.558 real 0m10.781s 00:30:59.558 user 0m44.244s 00:30:59.558 sys 0m2.121s 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.558 19:33:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:59.558 19:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:59.558 19:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.558 19:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.558 ************************************ 00:30:59.558 START TEST kernel_target_abort 00:30:59.558 
************************************ 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:30:59.558 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:59.559 19:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:59.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:59.816 Waiting for block devices as requested 00:31:00.074 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:00.074 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:00.074 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:00.074 No valid GPT data, bailing 00:31:00.333 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:00.334 No valid GPT data, bailing 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
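What the kernel_target_abort setup is doing in this stretch is hunting for an NVMe block device it can safely hand to the kernel nvmet target: each /sys/block/nvme* entry is rejected if it is zoned or if it already carries a partition table. The loop below is a simplified equivalent of that scan; the real block_in_use also runs scripts/spdk-gpt.py to catch SPDK-formatted GPT labels, which is omitted here. Note the loop keeps the last free device, which is why the trace ends up on /dev/nvme1n1.

nvme=
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    # The test skips zoned namespaces.
    [[ $(cat "$dev/queue/zoned" 2> /dev/null || echo none) == none ]] || continue
    # An empty PTTYPE means blkid found no partition table, i.e. the device is free.
    [[ -z $(blkid -s PTTYPE -o value "/dev/$name" 2> /dev/null) ]] || continue
    nvme=/dev/$name   # keep scanning; the last free device wins
done
echo "$nvme"          # /dev/nvme1n1 in the run above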
00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:00.334 No valid GPT data, bailing 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:00.334 No valid GPT data, bailing 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:00.334 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b --hostid=cb4c864e-bb30-4900-8fc1-989c4e76fc1b -a 10.0.0.1 -t tcp -s 4420 00:31:00.593 00:31:00.593 Discovery Log Number of Records 2, Generation counter 2 00:31:00.593 =====Discovery Log Entry 0====== 00:31:00.593 trtype: tcp 00:31:00.593 adrfam: ipv4 00:31:00.593 subtype: current discovery subsystem 00:31:00.593 treq: not specified, sq flow control disable supported 00:31:00.593 portid: 1 00:31:00.593 trsvcid: 4420 00:31:00.593 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:00.593 traddr: 10.0.0.1 00:31:00.593 eflags: none 00:31:00.593 sectype: none 00:31:00.593 =====Discovery Log Entry 1====== 00:31:00.593 trtype: tcp 00:31:00.593 adrfam: ipv4 00:31:00.593 subtype: nvme subsystem 00:31:00.593 treq: not specified, sq flow control disable supported 00:31:00.593 portid: 1 00:31:00.593 trsvcid: 4420 00:31:00.593 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:00.593 traddr: 10.0.0.1 00:31:00.593 eflags: none 00:31:00.593 sectype: none 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:00.593 19:33:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:00.593 19:33:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:03.876 Initializing NVMe Controllers 00:31:03.876 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:03.876 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:03.876 Initialization complete. Launching workers. 00:31:03.876 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34602, failed: 0 00:31:03.876 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34602, failed to submit 0 00:31:03.876 success 0, unsuccessful 34602, failed 0 00:31:03.876 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:03.876 19:33:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:07.157 Initializing NVMe Controllers 00:31:07.157 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:07.157 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:07.157 Initialization complete. Launching workers. 
00:31:07.157 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66827, failed: 0 00:31:07.158 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29154, failed to submit 37673 00:31:07.158 success 0, unsuccessful 29154, failed 0 00:31:07.158 19:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:07.158 19:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:10.503 Initializing NVMe Controllers 00:31:10.503 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:10.503 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:10.503 Initialization complete. Launching workers. 00:31:10.503 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76221, failed: 0 00:31:10.503 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19078, failed to submit 57143 00:31:10.503 success 0, unsuccessful 19078, failed 0 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:31:10.503 19:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:10.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:12.663 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:12.663 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:12.663 00:31:12.663 real 0m12.950s 00:31:12.663 user 0m6.391s 00:31:12.663 sys 0m3.969s 00:31:12.663 19:33:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.663 19:33:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.663 ************************************ 00:31:12.663 END TEST kernel_target_abort 00:31:12.663 ************************************ 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:12.663 
19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.663 rmmod nvme_tcp 00:31:12.663 rmmod nvme_fabrics 00:31:12.663 rmmod nvme_keyring 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 84507 ']' 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 84507 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 84507 ']' 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 84507 00:31:12.663 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84507) - No such process 00:31:12.663 Process with pid 84507 is not found 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 84507 is not found' 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:31:12.663 19:33:21 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:12.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:12.922 Waiting for block devices as requested 00:31:12.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.180 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:13.180 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:13.180 19:33:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:31:13.438 00:31:13.438 real 0m27.510s 00:31:13.438 user 0m52.049s 00:31:13.438 sys 0m7.599s 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.438 19:33:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.438 ************************************ 00:31:13.438 END TEST nvmf_abort_qd_sizes 00:31:13.438 ************************************ 00:31:13.438 19:33:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:13.438 19:33:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:13.438 19:33:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:13.438 19:33:22 -- common/autotest_common.sh@10 -- # set +x 00:31:13.438 ************************************ 00:31:13.438 START TEST keyring_file 00:31:13.438 ************************************ 00:31:13.438 19:33:22 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:13.697 * Looking for test storage... 
00:31:13.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:13.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.697 --rc genhtml_branch_coverage=1 00:31:13.697 --rc genhtml_function_coverage=1 00:31:13.697 --rc genhtml_legend=1 00:31:13.697 --rc geninfo_all_blocks=1 00:31:13.697 --rc geninfo_unexecuted_blocks=1 00:31:13.697 00:31:13.697 ' 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:13.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.697 --rc genhtml_branch_coverage=1 00:31:13.697 --rc genhtml_function_coverage=1 00:31:13.697 --rc genhtml_legend=1 00:31:13.697 --rc geninfo_all_blocks=1 00:31:13.697 --rc 
geninfo_unexecuted_blocks=1 00:31:13.697 00:31:13.697 ' 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:13.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.697 --rc genhtml_branch_coverage=1 00:31:13.697 --rc genhtml_function_coverage=1 00:31:13.697 --rc genhtml_legend=1 00:31:13.697 --rc geninfo_all_blocks=1 00:31:13.697 --rc geninfo_unexecuted_blocks=1 00:31:13.697 00:31:13.697 ' 00:31:13.697 19:33:22 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:13.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.697 --rc genhtml_branch_coverage=1 00:31:13.697 --rc genhtml_function_coverage=1 00:31:13.697 --rc genhtml_legend=1 00:31:13.697 --rc geninfo_all_blocks=1 00:31:13.697 --rc geninfo_unexecuted_blocks=1 00:31:13.697 00:31:13.697 ' 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.697 19:33:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.697 19:33:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.697 19:33:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.697 19:33:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.697 19:33:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:13.697 19:33:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:13.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:13.697 19:33:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:13.697 19:33:22 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.epJVrnS6DZ 00:31:13.697 19:33:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:13.697 19:33:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.epJVrnS6DZ 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.epJVrnS6DZ 00:31:13.698 19:33:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.epJVrnS6DZ 00:31:13.698 19:33:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v1LovvYxMa 00:31:13.698 19:33:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:31:13.698 19:33:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:31:13.956 19:33:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v1LovvYxMa 00:31:13.956 19:33:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v1LovvYxMa 00:31:13.956 19:33:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.v1LovvYxMa 00:31:13.956 19:33:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=85422 00:31:13.956 19:33:22 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:13.956 19:33:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85422 00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85422 ']' 00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:13.956 19:33:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:13.956 [2024-10-17 19:33:23.061155] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:31:13.956 [2024-10-17 19:33:23.061282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85422 ] 00:31:13.956 [2024-10-17 19:33:23.201099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.244 [2024-10-17 19:33:23.270284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.244 [2024-10-17 19:33:23.347676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:14.504 19:33:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.504 [2024-10-17 19:33:23.564478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.504 null0 00:31:14.504 [2024-10-17 19:33:23.596428] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:14.504 [2024-10-17 19:33:23.596767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.504 19:33:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.504 [2024-10-17 19:33:23.628457] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:14.504 request: 00:31:14.504 { 00:31:14.504 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.504 "secure_channel": false, 00:31:14.504 "listen_address": { 00:31:14.504 "trtype": "tcp", 00:31:14.504 "traddr": "127.0.0.1", 00:31:14.504 "trsvcid": "4420" 00:31:14.504 }, 00:31:14.504 "method": "nvmf_subsystem_add_listener", 
00:31:14.504 "req_id": 1 00:31:14.504 } 00:31:14.504 Got JSON-RPC error response 00:31:14.504 response: 00:31:14.504 { 00:31:14.504 "code": -32602, 00:31:14.504 "message": "Invalid parameters" 00:31:14.504 } 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:14.504 19:33:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=85433 00:31:14.504 19:33:23 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:14.504 19:33:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85433 /var/tmp/bperf.sock 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85433 ']' 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.504 19:33:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:14.504 [2024-10-17 19:33:23.698288] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:31:14.504 [2024-10-17 19:33:23.698415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85433 ] 00:31:14.762 [2024-10-17 19:33:23.840196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.762 [2024-10-17 19:33:23.905663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.762 [2024-10-17 19:33:23.963299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:15.021 19:33:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:15.021 19:33:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:15.021 19:33:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:15.021 19:33:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:15.279 19:33:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.v1LovvYxMa 00:31:15.279 19:33:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.v1LovvYxMa 00:31:15.536 19:33:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:31:15.536 19:33:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:15.536 19:33:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:15.536 19:33:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:15.536 19:33:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:15.794 19:33:25 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.epJVrnS6DZ == \/\t\m\p\/\t\m\p\.\e\p\J\V\r\n\S\6\D\Z ]] 00:31:15.794 19:33:25 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:31:15.794 19:33:25 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:31:15.794 19:33:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:15.794 19:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:15.794 19:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:16.360 19:33:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.v1LovvYxMa == \/\t\m\p\/\t\m\p\.\v\1\L\o\v\v\Y\x\M\a ]] 00:31:16.360 19:33:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.360 19:33:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:16.360 19:33:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:16.360 19:33:25 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.360 19:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:16.958 19:33:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:31:16.958 19:33:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:16.958 19:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:16.958 [2024-10-17 19:33:26.130976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:17.219 nvme0n1 00:31:17.219 19:33:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:31:17.219 19:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:17.219 19:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:17.219 19:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:17.219 19:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:17.219 19:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.477 19:33:26 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:31:17.477 19:33:26 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:31:17.477 19:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:17.477 19:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:17.477 19:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:17.477 19:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.477 19:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:17.735 19:33:26 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:31:17.735 19:33:26 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:17.735 Running I/O for 1 seconds... 
00:31:18.668 9769.00 IOPS, 38.16 MiB/s 00:31:18.668 Latency(us) 00:31:18.668 [2024-10-17T19:33:27.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.668 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:18.668 nvme0n1 : 1.01 9772.27 38.17 0.00 0.00 13012.45 5957.82 17873.45 00:31:18.668 [2024-10-17T19:33:27.926Z] =================================================================================================================== 00:31:18.668 [2024-10-17T19:33:27.926Z] Total : 9772.27 38.17 0.00 0.00 13012.45 5957.82 17873.45 00:31:18.668 { 00:31:18.668 "results": [ 00:31:18.668 { 00:31:18.668 "job": "nvme0n1", 00:31:18.668 "core_mask": "0x2", 00:31:18.668 "workload": "randrw", 00:31:18.668 "percentage": 50, 00:31:18.668 "status": "finished", 00:31:18.668 "queue_depth": 128, 00:31:18.668 "io_size": 4096, 00:31:18.668 "runtime": 1.012764, 00:31:18.668 "iops": 9772.266786734126, 00:31:18.668 "mibps": 38.17291713568018, 00:31:18.668 "io_failed": 0, 00:31:18.668 "io_timeout": 0, 00:31:18.668 "avg_latency_us": 13012.45100609, 00:31:18.668 "min_latency_us": 5957.818181818182, 00:31:18.668 "max_latency_us": 17873.454545454544 00:31:18.668 } 00:31:18.668 ], 00:31:18.668 "core_count": 1 00:31:18.668 } 00:31:18.668 19:33:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:18.668 19:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:19.235 19:33:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:31:19.235 19:33:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:19.235 19:33:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.235 19:33:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.235 19:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.235 19:33:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:19.493 19:33:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:19.493 19:33:28 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:31:19.493 19:33:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:19.493 19:33:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.493 19:33:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.493 19:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.493 19:33:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.750 19:33:28 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:31:19.750 19:33:28 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:19.750 19:33:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:19.750 19:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:20.007 [2024-10-17 19:33:29.222472] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:20.007 [2024-10-17 19:33:29.223364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb9e0 (107): Transport endpoint is not connected 00:31:20.007 [2024-10-17 19:33:29.224352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb9e0 (9): Bad file descriptor 00:31:20.007 [2024-10-17 19:33:29.225348] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:20.007 [2024-10-17 19:33:29.225371] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:20.007 [2024-10-17 19:33:29.225383] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:20.007 [2024-10-17 19:33:29.225395] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:20.007 request: 00:31:20.007 { 00:31:20.007 "name": "nvme0", 00:31:20.007 "trtype": "tcp", 00:31:20.007 "traddr": "127.0.0.1", 00:31:20.007 "adrfam": "ipv4", 00:31:20.007 "trsvcid": "4420", 00:31:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.007 "prchk_reftag": false, 00:31:20.007 "prchk_guard": false, 00:31:20.007 "hdgst": false, 00:31:20.007 "ddgst": false, 00:31:20.007 "psk": "key1", 00:31:20.007 "allow_unrecognized_csi": false, 00:31:20.007 "method": "bdev_nvme_attach_controller", 00:31:20.007 "req_id": 1 00:31:20.007 } 00:31:20.007 Got JSON-RPC error response 00:31:20.007 response: 00:31:20.007 { 00:31:20.007 "code": -5, 00:31:20.007 "message": "Input/output error" 00:31:20.007 } 00:31:20.007 19:33:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:20.007 19:33:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:20.007 19:33:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:20.007 19:33:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:20.007 19:33:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:31:20.007 19:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:20.007 19:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:20.007 19:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:20.007 19:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:20.007 19:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:20.572 19:33:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:20.572 19:33:29 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:31:20.572 19:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:20.572 19:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:20.572 19:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:20.572 19:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:20.572 19:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:20.831 19:33:29 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:31:20.831 19:33:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:31:20.831 19:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:21.089 19:33:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:31:21.089 19:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:21.346 19:33:30 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:31:21.346 19:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.346 19:33:30 keyring_file -- keyring/file.sh@78 -- # jq length 00:31:21.604 19:33:30 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:31:21.604 19:33:30 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.epJVrnS6DZ 00:31:21.604 19:33:30 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:21.604 19:33:30 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:21.604 19:33:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:21.604 19:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:21.862 [2024-10-17 19:33:31.034517] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.epJVrnS6DZ': 0100660 00:31:21.862 [2024-10-17 19:33:31.034571] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:21.862 request: 00:31:21.862 { 00:31:21.862 "name": "key0", 00:31:21.862 "path": "/tmp/tmp.epJVrnS6DZ", 00:31:21.862 "method": "keyring_file_add_key", 00:31:21.862 "req_id": 1 00:31:21.862 } 00:31:21.862 Got JSON-RPC error response 00:31:21.862 response: 00:31:21.862 { 00:31:21.862 "code": -1, 00:31:21.862 "message": "Operation not permitted" 00:31:21.862 } 00:31:21.862 19:33:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:21.862 19:33:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.862 19:33:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.862 19:33:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.862 19:33:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.epJVrnS6DZ 00:31:21.862 19:33:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:21.862 19:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.epJVrnS6DZ 00:31:22.121 19:33:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.epJVrnS6DZ 00:31:22.121 19:33:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:31:22.379 19:33:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:22.379 19:33:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:22.379 19:33:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.379 19:33:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:22.379 19:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.694 19:33:31 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:31:22.694 19:33:31 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.694 19:33:31 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.694 19:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:22.694 [2024-10-17 19:33:31.922732] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.epJVrnS6DZ': No such file or directory 00:31:22.694 [2024-10-17 19:33:31.922789] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:22.694 [2024-10-17 19:33:31.922813] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:22.694 [2024-10-17 19:33:31.922824] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:31:22.694 [2024-10-17 19:33:31.922835] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:22.694 [2024-10-17 19:33:31.922844] bdev_nvme.c:6545:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:22.694 request: 00:31:22.694 { 00:31:22.694 "name": "nvme0", 00:31:22.694 "trtype": "tcp", 00:31:22.694 "traddr": "127.0.0.1", 00:31:22.694 "adrfam": "ipv4", 00:31:22.694 "trsvcid": "4420", 00:31:22.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.694 "prchk_reftag": false, 00:31:22.694 "prchk_guard": false, 00:31:22.694 "hdgst": false, 00:31:22.694 "ddgst": false, 00:31:22.694 "psk": "key0", 00:31:22.694 "allow_unrecognized_csi": false, 00:31:22.694 "method": "bdev_nvme_attach_controller", 00:31:22.694 "req_id": 1 00:31:22.694 } 00:31:22.694 Got JSON-RPC error response 00:31:22.694 response: 00:31:22.694 { 00:31:22.694 "code": -19, 00:31:22.694 "message": "No such device" 00:31:22.694 } 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:22.694 19:33:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:22.694 19:33:31 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:31:22.694 19:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:22.952 19:33:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:22.952 
19:33:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6T78oKH2Kt 00:31:22.952 19:33:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:31:22.952 19:33:32 keyring_file -- nvmf/common.sh@731 -- # python - 00:31:23.210 19:33:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6T78oKH2Kt 00:31:23.210 19:33:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6T78oKH2Kt 00:31:23.210 19:33:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.6T78oKH2Kt 00:31:23.210 19:33:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6T78oKH2Kt 00:31:23.210 19:33:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6T78oKH2Kt 00:31:23.468 19:33:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.468 19:33:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.726 nvme0n1 00:31:23.726 19:33:32 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:31:23.726 19:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:23.726 19:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:23.726 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:23.726 19:33:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.726 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:23.984 19:33:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:31:23.984 19:33:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:31:23.984 19:33:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:24.551 19:33:33 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:31:24.551 19:33:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:31:24.551 19:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.551 19:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.551 19:33:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.809 19:33:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:31:24.809 19:33:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:31:24.809 19:33:33 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:31:24.809 19:33:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.809 19:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.809 19:33:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.809 19:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.066 19:33:34 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:31:25.066 19:33:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:25.066 19:33:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:25.324 19:33:34 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:31:25.324 19:33:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.324 19:33:34 keyring_file -- keyring/file.sh@105 -- # jq length 00:31:25.581 19:33:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:31:25.581 19:33:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6T78oKH2Kt 00:31:25.582 19:33:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6T78oKH2Kt 00:31:25.840 19:33:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.v1LovvYxMa 00:31:25.840 19:33:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.v1LovvYxMa 00:31:26.406 19:33:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.406 19:33:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.663 nvme0n1 00:31:26.663 19:33:35 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:31:26.663 19:33:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:27.227 19:33:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:31:27.227 "subsystems": [ 00:31:27.227 { 00:31:27.227 "subsystem": "keyring", 00:31:27.227 "config": [ 00:31:27.227 { 00:31:27.227 "method": "keyring_file_add_key", 00:31:27.227 "params": { 00:31:27.227 "name": "key0", 00:31:27.227 "path": "/tmp/tmp.6T78oKH2Kt" 00:31:27.227 } 00:31:27.227 }, 00:31:27.227 { 00:31:27.227 "method": "keyring_file_add_key", 00:31:27.227 "params": { 00:31:27.227 "name": "key1", 00:31:27.227 "path": "/tmp/tmp.v1LovvYxMa" 00:31:27.227 } 00:31:27.227 } 00:31:27.227 ] 00:31:27.227 }, 00:31:27.227 { 00:31:27.227 "subsystem": "iobuf", 00:31:27.227 "config": [ 00:31:27.227 { 00:31:27.227 "method": "iobuf_set_options", 00:31:27.227 "params": { 00:31:27.228 "small_pool_count": 8192, 00:31:27.228 "large_pool_count": 1024, 00:31:27.228 "small_bufsize": 8192, 00:31:27.228 "large_bufsize": 135168 00:31:27.228 } 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "subsystem": "sock", 00:31:27.228 "config": [ 
00:31:27.228 { 00:31:27.228 "method": "sock_set_default_impl", 00:31:27.228 "params": { 00:31:27.228 "impl_name": "uring" 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "sock_impl_set_options", 00:31:27.228 "params": { 00:31:27.228 "impl_name": "ssl", 00:31:27.228 "recv_buf_size": 4096, 00:31:27.228 "send_buf_size": 4096, 00:31:27.228 "enable_recv_pipe": true, 00:31:27.228 "enable_quickack": false, 00:31:27.228 "enable_placement_id": 0, 00:31:27.228 "enable_zerocopy_send_server": true, 00:31:27.228 "enable_zerocopy_send_client": false, 00:31:27.228 "zerocopy_threshold": 0, 00:31:27.228 "tls_version": 0, 00:31:27.228 "enable_ktls": false 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "sock_impl_set_options", 00:31:27.228 "params": { 00:31:27.228 "impl_name": "posix", 00:31:27.228 "recv_buf_size": 2097152, 00:31:27.228 "send_buf_size": 2097152, 00:31:27.228 "enable_recv_pipe": true, 00:31:27.228 "enable_quickack": false, 00:31:27.228 "enable_placement_id": 0, 00:31:27.228 "enable_zerocopy_send_server": true, 00:31:27.228 "enable_zerocopy_send_client": false, 00:31:27.228 "zerocopy_threshold": 0, 00:31:27.228 "tls_version": 0, 00:31:27.228 "enable_ktls": false 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "sock_impl_set_options", 00:31:27.228 "params": { 00:31:27.228 "impl_name": "uring", 00:31:27.228 "recv_buf_size": 2097152, 00:31:27.228 "send_buf_size": 2097152, 00:31:27.228 "enable_recv_pipe": true, 00:31:27.228 "enable_quickack": false, 00:31:27.228 "enable_placement_id": 0, 00:31:27.228 "enable_zerocopy_send_server": false, 00:31:27.228 "enable_zerocopy_send_client": false, 00:31:27.228 "zerocopy_threshold": 0, 00:31:27.228 "tls_version": 0, 00:31:27.228 "enable_ktls": false 00:31:27.228 } 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "subsystem": "vmd", 00:31:27.228 "config": [] 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "subsystem": "accel", 00:31:27.228 "config": [ 00:31:27.228 { 00:31:27.228 "method": "accel_set_options", 00:31:27.228 "params": { 00:31:27.228 "small_cache_size": 128, 00:31:27.228 "large_cache_size": 16, 00:31:27.228 "task_count": 2048, 00:31:27.228 "sequence_count": 2048, 00:31:27.228 "buf_count": 2048 00:31:27.228 } 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "subsystem": "bdev", 00:31:27.228 "config": [ 00:31:27.228 { 00:31:27.228 "method": "bdev_set_options", 00:31:27.228 "params": { 00:31:27.228 "bdev_io_pool_size": 65535, 00:31:27.228 "bdev_io_cache_size": 256, 00:31:27.228 "bdev_auto_examine": true, 00:31:27.228 "iobuf_small_cache_size": 128, 00:31:27.228 "iobuf_large_cache_size": 16 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_raid_set_options", 00:31:27.228 "params": { 00:31:27.228 "process_window_size_kb": 1024, 00:31:27.228 "process_max_bandwidth_mb_sec": 0 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_iscsi_set_options", 00:31:27.228 "params": { 00:31:27.228 "timeout_sec": 30 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_nvme_set_options", 00:31:27.228 "params": { 00:31:27.228 "action_on_timeout": "none", 00:31:27.228 "timeout_us": 0, 00:31:27.228 "timeout_admin_us": 0, 00:31:27.228 "keep_alive_timeout_ms": 10000, 00:31:27.228 "arbitration_burst": 0, 00:31:27.228 "low_priority_weight": 0, 00:31:27.228 "medium_priority_weight": 0, 00:31:27.228 "high_priority_weight": 0, 00:31:27.228 "nvme_adminq_poll_period_us": 10000, 00:31:27.228 
"nvme_ioq_poll_period_us": 0, 00:31:27.228 "io_queue_requests": 512, 00:31:27.228 "delay_cmd_submit": true, 00:31:27.228 "transport_retry_count": 4, 00:31:27.228 "bdev_retry_count": 3, 00:31:27.228 "transport_ack_timeout": 0, 00:31:27.228 "ctrlr_loss_timeout_sec": 0, 00:31:27.228 "reconnect_delay_sec": 0, 00:31:27.228 "fast_io_fail_timeout_sec": 0, 00:31:27.228 "disable_auto_failback": false, 00:31:27.228 "generate_uuids": false, 00:31:27.228 "transport_tos": 0, 00:31:27.228 "nvme_error_stat": false, 00:31:27.228 "rdma_srq_size": 0, 00:31:27.228 "io_path_stat": false, 00:31:27.228 "allow_accel_sequence": false, 00:31:27.228 "rdma_max_cq_size": 0, 00:31:27.228 "rdma_cm_event_timeout_ms": 0, 00:31:27.228 "dhchap_digests": [ 00:31:27.228 "sha256", 00:31:27.228 "sha384", 00:31:27.228 "sha512" 00:31:27.228 ], 00:31:27.228 "dhchap_dhgroups": [ 00:31:27.228 "null", 00:31:27.228 "ffdhe2048", 00:31:27.228 "ffdhe3072", 00:31:27.228 "ffdhe4096", 00:31:27.228 "ffdhe6144", 00:31:27.228 "ffdhe8192" 00:31:27.228 ] 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_nvme_attach_controller", 00:31:27.228 "params": { 00:31:27.228 "name": "nvme0", 00:31:27.228 "trtype": "TCP", 00:31:27.228 "adrfam": "IPv4", 00:31:27.228 "traddr": "127.0.0.1", 00:31:27.228 "trsvcid": "4420", 00:31:27.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.228 "prchk_reftag": false, 00:31:27.228 "prchk_guard": false, 00:31:27.228 "ctrlr_loss_timeout_sec": 0, 00:31:27.228 "reconnect_delay_sec": 0, 00:31:27.228 "fast_io_fail_timeout_sec": 0, 00:31:27.228 "psk": "key0", 00:31:27.228 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.228 "hdgst": false, 00:31:27.228 "ddgst": false, 00:31:27.228 "multipath": "multipath" 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_nvme_set_hotplug", 00:31:27.228 "params": { 00:31:27.228 "period_us": 100000, 00:31:27.228 "enable": false 00:31:27.228 } 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "method": "bdev_wait_for_examine" 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }, 00:31:27.228 { 00:31:27.228 "subsystem": "nbd", 00:31:27.228 "config": [] 00:31:27.228 } 00:31:27.228 ] 00:31:27.228 }' 00:31:27.228 19:33:36 keyring_file -- keyring/file.sh@115 -- # killprocess 85433 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85433 ']' 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85433 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85433 00:31:27.228 killing process with pid 85433 00:31:27.228 Received shutdown signal, test time was about 1.000000 seconds 00:31:27.228 00:31:27.228 Latency(us) 00:31:27.228 [2024-10-17T19:33:36.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.228 [2024-10-17T19:33:36.486Z] =================================================================================================================== 00:31:27.228 [2024-10-17T19:33:36.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85433' 00:31:27.228 19:33:36 keyring_file -- 
common/autotest_common.sh@969 -- # kill 85433 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@974 -- # wait 85433 00:31:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:27.228 19:33:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=85688 00:31:27.228 19:33:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85688 /var/tmp/bperf.sock 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85688 ']' 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:27.228 19:33:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:27.228 19:33:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:31:27.228 "subsystems": [ 00:31:27.229 { 00:31:27.229 "subsystem": "keyring", 00:31:27.229 "config": [ 00:31:27.229 { 00:31:27.229 "method": "keyring_file_add_key", 00:31:27.229 "params": { 00:31:27.229 "name": "key0", 00:31:27.229 "path": "/tmp/tmp.6T78oKH2Kt" 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "keyring_file_add_key", 00:31:27.229 "params": { 00:31:27.229 "name": "key1", 00:31:27.229 "path": "/tmp/tmp.v1LovvYxMa" 00:31:27.229 } 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "iobuf", 00:31:27.229 "config": [ 00:31:27.229 { 00:31:27.229 "method": "iobuf_set_options", 00:31:27.229 "params": { 00:31:27.229 "small_pool_count": 8192, 00:31:27.229 "large_pool_count": 1024, 00:31:27.229 "small_bufsize": 8192, 00:31:27.229 "large_bufsize": 135168 00:31:27.229 } 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "sock", 00:31:27.229 "config": [ 00:31:27.229 { 00:31:27.229 "method": "sock_set_default_impl", 00:31:27.229 "params": { 00:31:27.229 "impl_name": "uring" 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "sock_impl_set_options", 00:31:27.229 "params": { 00:31:27.229 "impl_name": "ssl", 00:31:27.229 "recv_buf_size": 4096, 00:31:27.229 "send_buf_size": 4096, 00:31:27.229 "enable_recv_pipe": true, 00:31:27.229 "enable_quickack": false, 00:31:27.229 "enable_placement_id": 0, 00:31:27.229 "enable_zerocopy_send_server": true, 00:31:27.229 "enable_zerocopy_send_client": false, 00:31:27.229 "zerocopy_threshold": 0, 00:31:27.229 "tls_version": 0, 00:31:27.229 "enable_ktls": false 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "sock_impl_set_options", 00:31:27.229 "params": { 00:31:27.229 "impl_name": "posix", 00:31:27.229 "recv_buf_size": 2097152, 00:31:27.229 "send_buf_size": 2097152, 00:31:27.229 "enable_recv_pipe": true, 00:31:27.229 "enable_quickack": false, 00:31:27.229 "enable_placement_id": 0, 00:31:27.229 "enable_zerocopy_send_server": true, 00:31:27.229 "enable_zerocopy_send_client": false, 00:31:27.229 "zerocopy_threshold": 0, 00:31:27.229 "tls_version": 0, 00:31:27.229 "enable_ktls": false 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "sock_impl_set_options", 00:31:27.229 "params": { 00:31:27.229 "impl_name": "uring", 00:31:27.229 "recv_buf_size": 2097152, 00:31:27.229 "send_buf_size": 2097152, 00:31:27.229 "enable_recv_pipe": true, 00:31:27.229 "enable_quickack": false, 00:31:27.229 
"enable_placement_id": 0, 00:31:27.229 "enable_zerocopy_send_server": false, 00:31:27.229 "enable_zerocopy_send_client": false, 00:31:27.229 "zerocopy_threshold": 0, 00:31:27.229 "tls_version": 0, 00:31:27.229 "enable_ktls": false 00:31:27.229 } 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "vmd", 00:31:27.229 "config": [] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "accel", 00:31:27.229 "config": [ 00:31:27.229 { 00:31:27.229 "method": "accel_set_options", 00:31:27.229 "params": { 00:31:27.229 "small_cache_size": 128, 00:31:27.229 "large_cache_size": 16, 00:31:27.229 "task_count": 2048, 00:31:27.229 "sequence_count": 2048, 00:31:27.229 "buf_count": 2048 00:31:27.229 } 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "bdev", 00:31:27.229 "config": [ 00:31:27.229 { 00:31:27.229 "method": "bdev_set_options", 00:31:27.229 "params": { 00:31:27.229 "bdev_io_pool_size": 65535, 00:31:27.229 "bdev_io_cache_size": 256, 00:31:27.229 "bdev_auto_examine": true, 00:31:27.229 "iobuf_small_cache_size": 128, 00:31:27.229 "iobuf_large_cache_size": 16 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_raid_set_options", 00:31:27.229 "params": { 00:31:27.229 "process_window_size_kb": 1024, 00:31:27.229 "process_max_bandwidth_mb_sec": 0 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_iscsi_set_options", 00:31:27.229 "params": { 00:31:27.229 "timeout_sec": 30 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_nvme_set_options", 00:31:27.229 "params": { 00:31:27.229 "action_on_timeout": "none", 00:31:27.229 "timeout_us": 0, 00:31:27.229 "timeout_admin_us": 0, 00:31:27.229 "keep_alive_timeout_ms": 10000, 00:31:27.229 "arbitration_burst": 0, 00:31:27.229 "low_priority_weight": 0, 00:31:27.229 "medium_priority_weight": 0, 00:31:27.229 "high_priority_weight": 0, 00:31:27.229 "nvme_adminq_poll_period_us": 10000, 00:31:27.229 "nvme_ioq_poll_period_us": 0, 00:31:27.229 "io_queue_requests": 512, 00:31:27.229 "delay_cmd_submit": true, 00:31:27.229 "transport_retry_count": 4, 00:31:27.229 "bdev_retry_count": 3, 00:31:27.229 "transport_ack_timeout": 0, 00:31:27.229 "ctrlr_loss_timeout_sec": 0, 00:31:27.229 "reconnect_delay_sec": 0, 00:31:27.229 "fast_io_fail_timeout_sec": 0, 00:31:27.229 "disable_auto_failback": false, 00:31:27.229 "generate_uuids": false, 00:31:27.229 "transport_tos": 0, 00:31:27.229 "nvme_error_stat": false, 00:31:27.229 "rdma_srq_size": 0, 00:31:27.229 "io_path_stat": false, 00:31:27.229 "allow_accel_sequence": false, 00:31:27.229 "rdma_max_cq_size": 0, 00:31:27.229 "rdma_cm_event_timeout_ms": 0, 00:31:27.229 "dhchap_digests": [ 00:31:27.229 "sha256", 00:31:27.229 "sha384", 00:31:27.229 "sha512" 00:31:27.229 ], 00:31:27.229 "dhchap_dhgroups": [ 00:31:27.229 "null", 00:31:27.229 "ffdhe2048", 00:31:27.229 "ffdhe3072", 00:31:27.229 "ffdhe4096", 00:31:27.229 "ffdhe6144", 00:31:27.229 "ffdhe8192" 00:31:27.229 ] 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_nvme_attach_controller", 00:31:27.229 "params": { 00:31:27.229 "name": "nvme0", 00:31:27.229 "trtype": "TCP", 00:31:27.229 "adrfam": "IPv4", 00:31:27.229 "traddr": "127.0.0.1", 00:31:27.229 "trsvcid": "4420", 00:31:27.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.229 "prchk_reftag": false, 00:31:27.229 "prchk_guard": false, 00:31:27.229 "ctrlr_loss_timeout_sec": 0, 00:31:27.229 "reconnect_delay_sec": 0, 00:31:27.229 "fast_io_fail_timeout_sec": 0, 
00:31:27.229 "psk": "key0", 00:31:27.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.229 "hdgst": false, 00:31:27.229 "ddgst": false, 00:31:27.229 "multipath": "multipath" 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_nvme_set_hotplug", 00:31:27.229 "params": { 00:31:27.229 "period_us": 100000, 00:31:27.229 "enable": false 00:31:27.229 } 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "method": "bdev_wait_for_examine" 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }, 00:31:27.229 { 00:31:27.229 "subsystem": "nbd", 00:31:27.229 "config": [] 00:31:27.229 } 00:31:27.229 ] 00:31:27.229 }' 00:31:27.229 19:33:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 19:33:36 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:27.488 [2024-10-17 19:33:36.502250] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 00:31:27.488 [2024-10-17 19:33:36.502359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85688 ] 00:31:27.488 [2024-10-17 19:33:36.636198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.488 [2024-10-17 19:33:36.695397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.746 [2024-10-17 19:33:36.830358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:27.746 [2024-10-17 19:33:36.887244] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:28.679 19:33:37 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:28.679 19:33:37 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:28.679 19:33:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:31:28.679 19:33:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.679 19:33:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:31:28.679 19:33:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:28.679 19:33:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:31:28.937 19:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:28.937 19:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.937 19:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.937 19:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:28.937 19:33:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:29.194 19:33:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:31:29.194 19:33:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:31:29.194 19:33:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:29.194 19:33:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:29.194 19:33:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:29.194 19:33:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:29.194 19:33:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:31:29.772 19:33:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:31:29.772 19:33:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:31:29.772 19:33:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:31:29.772 19:33:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:30.029 19:33:39 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:31:30.029 19:33:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:30.029 19:33:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6T78oKH2Kt /tmp/tmp.v1LovvYxMa 00:31:30.029 19:33:39 keyring_file -- keyring/file.sh@20 -- # killprocess 85688 00:31:30.029 19:33:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85688 ']' 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85688 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85688 00:31:30.030 killing process with pid 85688 00:31:30.030 Received shutdown signal, test time was about 1.000000 seconds 00:31:30.030 00:31:30.030 Latency(us) 00:31:30.030 [2024-10-17T19:33:39.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.030 [2024-10-17T19:33:39.288Z] =================================================================================================================== 00:31:30.030 [2024-10-17T19:33:39.288Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85688' 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@969 -- # kill 85688 00:31:30.030 19:33:39 keyring_file -- common/autotest_common.sh@974 -- # wait 85688 00:31:30.287 19:33:39 keyring_file -- keyring/file.sh@21 -- # killprocess 85422 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85422 ']' 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85422 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85422 00:31:30.287 killing process with pid 85422 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85422' 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@969 -- # kill 85422 00:31:30.287 19:33:39 keyring_file -- common/autotest_common.sh@974 -- # wait 85422 00:31:30.851 00:31:30.851 real 0m17.247s 00:31:30.851 user 0m44.223s 00:31:30.851 sys 0m3.400s 00:31:30.851 19:33:39 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:30.851 19:33:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 
00:31:30.851 ************************************ 00:31:30.851 END TEST keyring_file 00:31:30.851 ************************************ 00:31:30.851 19:33:39 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:31:30.851 19:33:39 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:30.851 19:33:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:30.851 19:33:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.851 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:31:30.851 ************************************ 00:31:30.851 START TEST keyring_linux 00:31:30.851 ************************************ 00:31:30.851 19:33:39 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:30.851 Joined session keyring: 400513683 00:31:30.851 * Looking for test storage... 00:31:30.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:30.851 19:33:40 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:30.851 19:33:40 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:31:30.851 19:33:40 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:30.851 19:33:40 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.851 19:33:40 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@345 -- # : 1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.852 19:33:40 keyring_linux -- scripts/common.sh@368 -- # return 0 00:31:30.852 19:33:40 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.852 19:33:40 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.852 --rc genhtml_branch_coverage=1 00:31:30.852 --rc genhtml_function_coverage=1 00:31:30.852 --rc genhtml_legend=1 00:31:30.852 --rc geninfo_all_blocks=1 00:31:30.852 --rc geninfo_unexecuted_blocks=1 00:31:30.852 00:31:30.852 ' 00:31:30.852 19:33:40 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.852 --rc genhtml_branch_coverage=1 00:31:30.852 --rc genhtml_function_coverage=1 00:31:30.852 --rc genhtml_legend=1 00:31:30.852 --rc geninfo_all_blocks=1 00:31:30.852 --rc geninfo_unexecuted_blocks=1 00:31:30.852 00:31:30.852 ' 00:31:30.852 19:33:40 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.852 --rc genhtml_branch_coverage=1 00:31:30.852 --rc genhtml_function_coverage=1 00:31:30.852 --rc genhtml_legend=1 00:31:30.852 --rc geninfo_all_blocks=1 00:31:30.852 --rc geninfo_unexecuted_blocks=1 00:31:30.852 00:31:30.852 ' 00:31:30.852 19:33:40 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.852 --rc genhtml_branch_coverage=1 00:31:30.852 --rc genhtml_function_coverage=1 00:31:30.852 --rc genhtml_legend=1 00:31:30.852 --rc geninfo_all_blocks=1 00:31:30.852 --rc geninfo_unexecuted_blocks=1 00:31:30.852 00:31:30.852 ' 00:31:30.852 19:33:40 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:30.852 19:33:40 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.852 19:33:40 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.852 19:33:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4c864e-bb30-4900-8fc1-989c4e76fc1b 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:31.110 19:33:40 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.110 19:33:40 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.110 19:33:40 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.110 19:33:40 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.110 19:33:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.110 19:33:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.110 19:33:40 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.110 19:33:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:31.110 19:33:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@731 -- # python - 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:31.110 /tmp/:spdk-test:key0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:31:31.110 19:33:40 keyring_linux -- nvmf/common.sh@731 -- # python - 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:31.110 /tmp/:spdk-test:key1 00:31:31.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.110 19:33:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85821 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:31.110 19:33:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85821 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85821 ']' 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.110 19:33:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.110 [2024-10-17 19:33:40.305295] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
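For reference, the prep_key/format_interchange_psk steps traced above are what turn the raw key material into the interchange string that later shows up in the keyctl commands (NVMeTLSkey-1:00:...). A minimal sketch of that transformation follows; it is not the SPDK helper itself, the appended CRC-32 and its little-endian byte order are assumptions inferred from the strings in this log, and python3 is assumed to be available (the traced script likewise pipes this step through python).

# Sketch only: build the interchange-format PSK for key0, assuming the layout is
#   "NVMeTLSkey-1:<digest>:" + base64(<key bytes> + CRC-32 of the key, little endian) + ":"
key="00112233445566778899aabbccddeeff"        # same test key as in the log
psk="NVMeTLSkey-1:00:$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
sys.stdout.write(base64.b64encode(key + crc).decode())
' "$key"):"
echo "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0               # keep the PSK file private, as the trace does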
00:31:31.110 [2024-10-17 19:33:40.305645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85821 ] 00:31:31.369 [2024-10-17 19:33:40.444188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.369 [2024-10-17 19:33:40.522889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.626 [2024-10-17 19:33:40.628864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.884 [2024-10-17 19:33:40.898716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.884 null0 00:31:31.884 [2024-10-17 19:33:40.930679] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:31.884 [2024-10-17 19:33:40.930962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:31.884 931745959 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:31.884 275503927 00:31:31.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85836 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:31.884 19:33:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85836 /var/tmp/bperf.sock 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85836 ']' 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:31.884 19:33:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.884 [2024-10-17 19:33:41.011658] Starting SPDK v25.01-pre git sha1 006f950ff / DPDK 24.03.0 initialization... 
00:31:31.884 [2024-10-17 19:33:41.012010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85836 ] 00:31:32.142 [2024-10-17 19:33:41.150236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.142 [2024-10-17 19:33:41.220243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.142 19:33:41 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:32.142 19:33:41 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:31:32.142 19:33:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:32.142 19:33:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:32.402 19:33:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:32.402 19:33:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:32.660 [2024-10-17 19:33:41.900340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:32.918 19:33:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:32.918 19:33:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:33.176 [2024-10-17 19:33:42.205954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:33.176 nvme0n1 00:31:33.176 19:33:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:33.176 19:33:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:33.176 19:33:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:33.176 19:33:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:33.176 19:33:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.176 19:33:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:33.435 19:33:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:33.435 19:33:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:33.435 19:33:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:33.436 19:33:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:33.436 19:33:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:33.436 19:33:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:33.436 19:33:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@25 -- # sn=931745959 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
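The keyring_linux flow above stores each interchange string as a plain user-type key in the caller's session keyring and then hands bdevperf only the key name (--psk :spdk-test:key0). Condensed to the keyctl commands that actually appear in this log, and reusing the psk string from the sketch above, the round trip looks roughly like this (serial numbers such as 931745959 are per-run values):

# Add the PSK to the session keyring (@s); keyctl prints the new key's serial number.
sn=$(keyctl add user ":spdk-test:key0" "$psk" @s)
# Later, find the key again by name (this is what get_keysn does in the trace) ...
keyctl search @s user ":spdk-test:key0"
# ... dump its payload to compare against the expected interchange string ...
keyctl print "$sn"
# ... and unlink it from the session keyring during cleanup.
keyctl unlink "$sn"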
00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@26 -- # [[ 931745959 == \9\3\1\7\4\5\9\5\9 ]] 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 931745959 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:33.693 19:33:42 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:33.952 Running I/O for 1 seconds... 00:31:34.886 9011.00 IOPS, 35.20 MiB/s 00:31:34.886 Latency(us) 00:31:34.886 [2024-10-17T19:33:44.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.886 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:34.886 nvme0n1 : 1.01 9017.97 35.23 0.00 0.00 14092.53 7417.48 19779.96 00:31:34.886 [2024-10-17T19:33:44.144Z] =================================================================================================================== 00:31:34.886 [2024-10-17T19:33:44.144Z] Total : 9017.97 35.23 0.00 0.00 14092.53 7417.48 19779.96 00:31:34.886 { 00:31:34.886 "results": [ 00:31:34.886 { 00:31:34.886 "job": "nvme0n1", 00:31:34.886 "core_mask": "0x2", 00:31:34.886 "workload": "randread", 00:31:34.886 "status": "finished", 00:31:34.886 "queue_depth": 128, 00:31:34.886 "io_size": 4096, 00:31:34.886 "runtime": 1.013421, 00:31:34.886 "iops": 9017.96982695247, 00:31:34.886 "mibps": 35.22644463653309, 00:31:34.886 "io_failed": 0, 00:31:34.886 "io_timeout": 0, 00:31:34.886 "avg_latency_us": 14092.533098309941, 00:31:34.886 "min_latency_us": 7417.483636363636, 00:31:34.886 "max_latency_us": 19779.956363636364 00:31:34.886 } 00:31:34.886 ], 00:31:34.886 "core_count": 1 00:31:34.886 } 00:31:34.886 19:33:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:34.886 19:33:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:35.144 19:33:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:35.144 19:33:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:35.144 19:33:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:35.144 19:33:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:35.144 19:33:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:35.144 19:33:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:35.401 19:33:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:35.401 19:33:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:35.401 19:33:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:35.401 19:33:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.401 19:33:44 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:35.401 19:33:44 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.401 
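As a quick consistency check on the bdevperf summary above: 9017.97 IOPS of 4096-byte reads works out to 9017.97 × 4096 ≈ 36,937,605 bytes/s, and 36,937,605 / 1,048,576 ≈ 35.23 MiB/s, which matches the reported throughput and the mibps field in the JSON results.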
19:33:44 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:35.401 19:33:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:35.401 19:33:44 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:35.401 19:33:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:35.402 19:33:44 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.402 19:33:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.659 [2024-10-17 19:33:44.898441] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:35.659 [2024-10-17 19:33:44.898962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x698230 (107): Transport endpoint is not connected 00:31:35.659 [2024-10-17 19:33:44.899942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x698230 (9): Bad file descriptor 00:31:35.659 [2024-10-17 19:33:44.900938] nvme_ctrlr.c:4250:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.659 [2024-10-17 19:33:44.900961] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:35.659 [2024-10-17 19:33:44.900972] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:35.659 [2024-10-17 19:33:44.900985] nvme_ctrlr.c:1152:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:35.659 request: 00:31:35.659 { 00:31:35.659 "name": "nvme0", 00:31:35.659 "trtype": "tcp", 00:31:35.659 "traddr": "127.0.0.1", 00:31:35.659 "adrfam": "ipv4", 00:31:35.659 "trsvcid": "4420", 00:31:35.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.659 "prchk_reftag": false, 00:31:35.659 "prchk_guard": false, 00:31:35.659 "hdgst": false, 00:31:35.659 "ddgst": false, 00:31:35.659 "psk": ":spdk-test:key1", 00:31:35.659 "allow_unrecognized_csi": false, 00:31:35.659 "method": "bdev_nvme_attach_controller", 00:31:35.659 "req_id": 1 00:31:35.659 } 00:31:35.659 Got JSON-RPC error response 00:31:35.659 response: 00:31:35.659 { 00:31:35.659 "code": -5, 00:31:35.659 "message": "Input/output error" 00:31:35.659 } 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@33 -- # sn=931745959 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 931745959 00:31:35.917 1 links removed 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@33 -- # sn=275503927 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 275503927 00:31:35.917 1 links removed 00:31:35.917 19:33:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85836 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85836 ']' 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85836 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85836 00:31:35.917 killing process with pid 85836 00:31:35.917 Received shutdown signal, test time was about 1.000000 seconds 00:31:35.917 00:31:35.917 Latency(us) 00:31:35.917 [2024-10-17T19:33:45.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.917 [2024-10-17T19:33:45.175Z] =================================================================================================================== 00:31:35.917 [2024-10-17T19:33:45.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.917 19:33:44 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85836' 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@969 -- # kill 85836 00:31:35.917 19:33:44 keyring_linux -- common/autotest_common.sh@974 -- # wait 85836 00:31:35.917 19:33:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85821 00:31:35.917 19:33:45 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85821 ']' 00:31:35.917 19:33:45 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85821 00:31:35.917 19:33:45 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85821 00:31:36.175 killing process with pid 85821 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85821' 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@969 -- # kill 85821 00:31:36.175 19:33:45 keyring_linux -- common/autotest_common.sh@974 -- # wait 85821 00:31:36.740 00:31:36.740 real 0m5.852s 00:31:36.740 user 0m11.194s 00:31:36.740 sys 0m1.700s 00:31:36.740 19:33:45 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:36.740 19:33:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:36.740 ************************************ 00:31:36.740 END TEST keyring_linux 00:31:36.740 ************************************ 00:31:36.740 19:33:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:36.740 19:33:45 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:31:36.740 19:33:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:36.740 19:33:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:36.740 19:33:45 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:31:36.740 19:33:45 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:31:36.740 19:33:45 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:31:36.740 19:33:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.740 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:31:36.740 19:33:45 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:31:36.740 19:33:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:36.740 19:33:45 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:36.740 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:31:38.641 INFO: APP EXITING 00:31:38.641 INFO: killing all VMs 
00:31:38.641 INFO: killing vhost app 00:31:38.641 INFO: EXIT DONE 00:31:39.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:39.234 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:39.234 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:40.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:40.168 Cleaning 00:31:40.168 Removing: /var/run/dpdk/spdk0/config 00:31:40.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:40.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:40.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:40.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:40.168 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:40.168 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:40.168 Removing: /var/run/dpdk/spdk1/config 00:31:40.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:40.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:40.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:40.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:40.168 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:40.168 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:40.168 Removing: /var/run/dpdk/spdk2/config 00:31:40.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:40.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:40.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:40.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:40.168 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:40.168 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:40.168 Removing: /var/run/dpdk/spdk3/config 00:31:40.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:40.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:40.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:40.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:40.168 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:40.168 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:40.168 Removing: /var/run/dpdk/spdk4/config 00:31:40.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:40.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:40.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:40.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:40.168 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:40.168 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:40.168 Removing: /dev/shm/nvmf_trace.0 00:31:40.168 Removing: /dev/shm/spdk_tgt_trace.pid56857 00:31:40.168 Removing: /var/run/dpdk/spdk0 00:31:40.168 Removing: /var/run/dpdk/spdk1 00:31:40.168 Removing: /var/run/dpdk/spdk2 00:31:40.168 Removing: /var/run/dpdk/spdk3 00:31:40.168 Removing: /var/run/dpdk/spdk4 00:31:40.168 Removing: /var/run/dpdk/spdk_pid56704 00:31:40.168 Removing: /var/run/dpdk/spdk_pid56857 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57056 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57142 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57170 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57279 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57297 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57437 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57638 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57792 00:31:40.168 Removing: /var/run/dpdk/spdk_pid57864 00:31:40.168 
Removing: /var/run/dpdk/spdk_pid57941 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58032 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58110 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58143 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58178 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58248 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58353 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58803 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58847 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58896 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58905 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58977 00:31:40.168 Removing: /var/run/dpdk/spdk_pid58986 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59058 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59074 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59125 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59143 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59189 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59194 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59333 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59368 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59451 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59785 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59808 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59839 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59853 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59868 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59889 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59908 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59928 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59948 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59956 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59977 00:31:40.168 Removing: /var/run/dpdk/spdk_pid59996 00:31:40.168 Removing: /var/run/dpdk/spdk_pid60015 00:31:40.168 Removing: /var/run/dpdk/spdk_pid60025 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60050 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60063 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60086 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60108 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60122 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60137 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60173 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60186 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60216 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60288 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60317 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60326 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60355 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60364 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60376 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60415 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60433 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60462 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60471 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60481 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60490 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60500 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60509 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60519 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60534 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60562 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60589 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60598 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60627 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60636 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60644 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60690 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60700 00:31:40.426 Removing: 
/var/run/dpdk/spdk_pid60728 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60735 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60743 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60752 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60758 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60771 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60773 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60786 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60868 00:31:40.426 Removing: /var/run/dpdk/spdk_pid60910 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61028 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61067 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61112 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61131 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61143 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61163 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61200 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61216 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61294 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61315 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61365 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61436 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61502 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61525 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61625 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61673 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61700 00:31:40.426 Removing: /var/run/dpdk/spdk_pid61932 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62035 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62058 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62093 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62121 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62160 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62194 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62225 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62619 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62659 00:31:40.426 Removing: /var/run/dpdk/spdk_pid62997 00:31:40.426 Removing: /var/run/dpdk/spdk_pid63466 00:31:40.426 Removing: /var/run/dpdk/spdk_pid63734 00:31:40.426 Removing: /var/run/dpdk/spdk_pid64610 00:31:40.426 Removing: /var/run/dpdk/spdk_pid65536 00:31:40.426 Removing: /var/run/dpdk/spdk_pid65653 00:31:40.426 Removing: /var/run/dpdk/spdk_pid65721 00:31:40.426 Removing: /var/run/dpdk/spdk_pid67136 00:31:40.426 Removing: /var/run/dpdk/spdk_pid67455 00:31:40.426 Removing: /var/run/dpdk/spdk_pid71329 00:31:40.426 Removing: /var/run/dpdk/spdk_pid71698 00:31:40.426 Removing: /var/run/dpdk/spdk_pid71803 00:31:40.426 Removing: /var/run/dpdk/spdk_pid71940 00:31:40.426 Removing: /var/run/dpdk/spdk_pid71977 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72005 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72034 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72124 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72261 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72421 00:31:40.426 Removing: /var/run/dpdk/spdk_pid72508 00:31:40.684 Removing: /var/run/dpdk/spdk_pid72696 00:31:40.684 Removing: /var/run/dpdk/spdk_pid72764 00:31:40.684 Removing: /var/run/dpdk/spdk_pid72855 00:31:40.684 Removing: /var/run/dpdk/spdk_pid73206 00:31:40.684 Removing: /var/run/dpdk/spdk_pid73647 00:31:40.684 Removing: /var/run/dpdk/spdk_pid73648 00:31:40.684 Removing: /var/run/dpdk/spdk_pid73649 00:31:40.684 Removing: /var/run/dpdk/spdk_pid73903 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74227 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74230 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74553 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74567 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74581 
00:31:40.684 Removing: /var/run/dpdk/spdk_pid74612 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74621 00:31:40.684 Removing: /var/run/dpdk/spdk_pid74982 00:31:40.684 Removing: /var/run/dpdk/spdk_pid75026 00:31:40.684 Removing: /var/run/dpdk/spdk_pid75353 00:31:40.684 Removing: /var/run/dpdk/spdk_pid75560 00:31:40.684 Removing: /var/run/dpdk/spdk_pid75993 00:31:40.684 Removing: /var/run/dpdk/spdk_pid76565 00:31:40.684 Removing: /var/run/dpdk/spdk_pid77474 00:31:40.684 Removing: /var/run/dpdk/spdk_pid78102 00:31:40.684 Removing: /var/run/dpdk/spdk_pid78108 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80149 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80214 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80271 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80337 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80449 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80502 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80556 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80603 00:31:40.684 Removing: /var/run/dpdk/spdk_pid80964 00:31:40.684 Removing: /var/run/dpdk/spdk_pid82179 00:31:40.684 Removing: /var/run/dpdk/spdk_pid82318 00:31:40.684 Removing: /var/run/dpdk/spdk_pid82549 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83149 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83309 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83462 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83559 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83738 00:31:40.684 Removing: /var/run/dpdk/spdk_pid83847 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84558 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84593 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84629 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84884 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84919 00:31:40.684 Removing: /var/run/dpdk/spdk_pid84949 00:31:40.684 Removing: /var/run/dpdk/spdk_pid85422 00:31:40.684 Removing: /var/run/dpdk/spdk_pid85433 00:31:40.684 Removing: /var/run/dpdk/spdk_pid85688 00:31:40.684 Removing: /var/run/dpdk/spdk_pid85821 00:31:40.684 Removing: /var/run/dpdk/spdk_pid85836 00:31:40.684 Clean 00:31:40.684 19:33:49 -- common/autotest_common.sh@1451 -- # return 0 00:31:40.684 19:33:49 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:31:40.684 19:33:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.684 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:40.684 19:33:49 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:31:40.684 19:33:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.684 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:40.941 19:33:49 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:40.941 19:33:49 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:40.941 19:33:49 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:40.941 19:33:49 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:31:40.941 19:33:49 -- spdk/autotest.sh@394 -- # hostname 00:31:40.941 19:33:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:40.941 geninfo: WARNING: invalid characters removed from testname! 
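The lcov invocations that follow merge the baseline and post-test coverage captures and then strip vendored, system, and example paths so the report only counts SPDK code. Reduced to its generic shape (paths shortened from the log; the final genhtml step is the usual way to render such an .info file and is assumed here, it is not part of this excerpt):

# Merge the baseline capture with the capture taken after the tests ran.
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# Drop paths that should not count toward coverage: third-party DPDK, system headers, example apps.
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
# Render an HTML report from the filtered data (assumed follow-up, not shown in this log).
genhtml cov_total.info -o coverage_html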
00:32:13.002 19:34:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:14.902 19:34:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:18.196 19:34:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:20.747 19:34:29 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:24.032 19:34:32 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:26.577 19:34:35 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:29.859 19:34:38 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:29.859 19:34:38 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:32:29.859 19:34:38 -- common/autotest_common.sh@1691 -- $ lcov --version 00:32:29.859 19:34:38 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:32:29.859 19:34:38 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:32:29.859 19:34:38 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:32:29.859 19:34:38 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:32:29.859 19:34:38 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:32:29.859 19:34:38 -- scripts/common.sh@336 -- $ IFS=.-: 00:32:29.859 19:34:38 -- scripts/common.sh@336 -- $ read -ra ver1 00:32:29.859 19:34:38 -- scripts/common.sh@337 -- $ IFS=.-: 00:32:29.859 19:34:38 -- scripts/common.sh@337 -- $ read -ra ver2 00:32:29.859 19:34:38 -- scripts/common.sh@338 -- $ local 'op=<' 00:32:29.859 19:34:38 -- scripts/common.sh@340 -- $ ver1_l=2 00:32:29.859 19:34:38 -- scripts/common.sh@341 -- $ ver2_l=1 00:32:29.859 19:34:38 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:32:29.859 19:34:38 -- scripts/common.sh@344 -- $ case "$op" in 00:32:29.859 19:34:38 -- scripts/common.sh@345 -- $ : 1 00:32:29.859 19:34:38 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:32:29.859 19:34:38 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:29.859 19:34:38 -- scripts/common.sh@365 -- $ decimal 1 00:32:29.859 19:34:38 -- scripts/common.sh@353 -- $ local d=1 00:32:29.859 19:34:38 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:32:29.859 19:34:38 -- scripts/common.sh@355 -- $ echo 1 00:32:29.859 19:34:38 -- scripts/common.sh@365 -- $ ver1[v]=1 00:32:29.859 19:34:38 -- scripts/common.sh@366 -- $ decimal 2 00:32:29.859 19:34:38 -- scripts/common.sh@353 -- $ local d=2 00:32:29.859 19:34:38 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:32:29.859 19:34:38 -- scripts/common.sh@355 -- $ echo 2 00:32:29.859 19:34:38 -- scripts/common.sh@366 -- $ ver2[v]=2 00:32:29.859 19:34:38 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:32:29.859 19:34:38 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:32:29.859 19:34:38 -- scripts/common.sh@368 -- $ return 0 00:32:29.859 19:34:38 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.859 19:34:38 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:32:29.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.859 --rc genhtml_branch_coverage=1 00:32:29.859 --rc genhtml_function_coverage=1 00:32:29.859 --rc genhtml_legend=1 00:32:29.859 --rc geninfo_all_blocks=1 00:32:29.859 --rc geninfo_unexecuted_blocks=1 00:32:29.859 00:32:29.859 ' 00:32:29.859 19:34:38 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:32:29.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.859 --rc genhtml_branch_coverage=1 00:32:29.859 --rc genhtml_function_coverage=1 00:32:29.859 --rc genhtml_legend=1 00:32:29.859 --rc geninfo_all_blocks=1 00:32:29.859 --rc geninfo_unexecuted_blocks=1 00:32:29.859 00:32:29.859 ' 00:32:29.859 19:34:38 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:32:29.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.859 --rc genhtml_branch_coverage=1 00:32:29.859 --rc genhtml_function_coverage=1 00:32:29.859 --rc genhtml_legend=1 00:32:29.859 --rc geninfo_all_blocks=1 00:32:29.859 --rc geninfo_unexecuted_blocks=1 00:32:29.859 00:32:29.859 ' 00:32:29.859 19:34:38 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:32:29.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.859 --rc genhtml_branch_coverage=1 00:32:29.859 --rc genhtml_function_coverage=1 00:32:29.859 --rc genhtml_legend=1 00:32:29.859 --rc geninfo_all_blocks=1 00:32:29.859 --rc geninfo_unexecuted_blocks=1 00:32:29.859 00:32:29.859 ' 00:32:29.859 19:34:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:29.859 19:34:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:32:29.859 19:34:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:29.859 19:34:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.859 19:34:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.859 19:34:38 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:29.859 19:34:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:29.859 19:34:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:29.859 19:34:38 -- paths/export.sh@5 -- $ export PATH
00:32:29.859 19:34:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:29.859 19:34:38 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:29.859 19:34:38 -- common/autobuild_common.sh@486 -- $ date +%s
00:32:29.860 19:34:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729193678.XXXXXX
00:32:29.860 19:34:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729193678.y0RCdJ
00:32:29.860 19:34:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:32:29.860 19:34:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:32:29.860 19:34:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:32:29.860 19:34:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:29.860 19:34:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:29.860 19:34:38 -- common/autobuild_common.sh@502 -- $ get_config_params
00:32:29.860 19:34:38 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:32:29.860 19:34:38 -- common/autotest_common.sh@10 -- $ set +x
00:32:29.860 19:34:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:32:29.860 19:34:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:32:29.860 19:34:38 -- pm/common@17 -- $ local monitor
00:32:29.860 19:34:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:29.860 19:34:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:29.860 19:34:38 -- pm/common@25 -- $ sleep 1
00:32:29.860 19:34:38 -- pm/common@21 -- $ date +%s
00:32:29.860 19:34:38 -- pm/common@21 -- $ date +%s
00:32:29.860 19:34:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729193678
00:32:29.860 19:34:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729193678
00:32:29.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729193678_collect-cpu-load.pm.log
00:32:29.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729193678_collect-vmstat.pm.log
00:32:30.794 19:34:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:32:30.794 19:34:39 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:32:30.794 19:34:39 -- spdk/autopackage.sh@14 -- $ timing_finish
00:32:30.794 19:34:39 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:30.794 19:34:39 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:30.794 19:34:39 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:30.794 19:34:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:30.794 19:34:39 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:30.794 19:34:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:30.794 19:34:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:30.794 19:34:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:30.794 19:34:39 -- pm/common@44 -- $ pid=87603
00:32:30.794 19:34:39 -- pm/common@50 -- $ kill -TERM 87603
00:32:30.794 19:34:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:30.794 19:34:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:30.794 19:34:39 -- pm/common@44 -- $ pid=87604
00:32:30.794 19:34:39 -- pm/common@50 -- $ kill -TERM 87604
00:32:30.794 + [[ -n 5258 ]]
00:32:30.794 + sudo kill 5258
00:32:30.803 [Pipeline] }
00:32:30.819 [Pipeline] // timeout
00:32:30.824 [Pipeline] }
00:32:30.837 [Pipeline] // stage
00:32:30.842 [Pipeline] }
00:32:30.856 [Pipeline] // catchError
00:32:30.864 [Pipeline] stage
00:32:30.867 [Pipeline] { (Stop VM)
00:32:30.878 [Pipeline] sh
00:32:31.157 + vagrant halt
00:32:35.341 ==> default: Halting domain...
00:32:41.909 [Pipeline] sh
00:32:42.186 + vagrant destroy -f
00:32:46.365 ==> default: Removing domain...
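Note on the monitor teardown traced above (pm/common@40-50): each resource collector started by start_monitor_resources records its process ID in a collect-*.pid file under the output/power directory, and stop_monitor_resources, registered via `trap stop_monitor_resources EXIT`, later sends TERM to whichever pids are still registered. The following is a minimal, self-contained sketch of that pid-file pattern only; it is not the actual SPDK scripts/perf/pm code, and the function names and the dummy collector loop are invented for illustration.

    #!/usr/bin/env bash
    # Illustrative sketch of the pid-file start/stop pattern shown in the
    # pm/common trace above. Hypothetical code, not the real SPDK helpers.
    set -euo pipefail

    power_dir=${1:-./power}     # where pid files and logs are written
    mkdir -p "$power_dir"

    start_monitors() {
        local monitor
        for monitor in collect-cpu-load collect-vmstat; do
            # Launch each collector in the background and record its pid,
            # mirroring the collect-*.pid files seen in the log.
            (while :; do date; sleep 1; done) >"$power_dir/$monitor.log" &
            echo $! >"$power_dir/$monitor.pid"
        done
    }

    stop_monitors() {
        local monitor pid
        for monitor in collect-cpu-load collect-vmstat; do
            # Only signal a monitor whose pid file still exists,
            # analogous to the [[ -e ...pid ]] / kill -TERM steps above.
            if [[ -e $power_dir/$monitor.pid ]]; then
                pid=$(<"$power_dir/$monitor.pid")
                kill -TERM "$pid" 2>/dev/null || true
                rm -f "$power_dir/$monitor.pid"
            fi
        done
    }

    # Stop the collectors however the script exits, like the
    # 'trap stop_monitor_resources EXIT' recorded in the trace.
    trap stop_monitors EXIT

    start_monitors
    sleep 2   # stand-in for the packaging work done between start and stop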
00:32:46.377 [Pipeline] sh
00:32:46.655 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:32:46.663 [Pipeline] }
00:32:46.677 [Pipeline] // stage
00:32:46.683 [Pipeline] }
00:32:46.697 [Pipeline] // dir
00:32:46.702 [Pipeline] }
00:32:46.716 [Pipeline] // wrap
00:32:46.722 [Pipeline] }
00:32:46.735 [Pipeline] // catchError
00:32:46.744 [Pipeline] stage
00:32:46.747 [Pipeline] { (Epilogue)
00:32:46.760 [Pipeline] sh
00:32:47.097 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:53.664 [Pipeline] catchError
00:32:53.667 [Pipeline] {
00:32:53.680 [Pipeline] sh
00:32:53.969 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:54.227 Artifacts sizes are good
00:32:54.236 [Pipeline] }
00:32:54.252 [Pipeline] // catchError
00:32:54.263 [Pipeline] archiveArtifacts
00:32:54.271 Archiving artifacts
00:32:54.395 [Pipeline] cleanWs
00:32:54.411 [WS-CLEANUP] Deleting project workspace...
00:32:54.411 [WS-CLEANUP] Deferred wipeout is used...
00:32:54.442 [WS-CLEANUP] done
00:32:54.444 [Pipeline] }
00:32:54.461 [Pipeline] // stage
00:32:54.466 [Pipeline] }
00:32:54.480 [Pipeline] // node
00:32:54.485 [Pipeline] End of Pipeline
00:32:54.523 Finished: SUCCESS
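Context on the Epilogue stage above: the pipeline compresses the output directory, runs check_artifacts_size.sh as a gate (logging "Artifacts sizes are good") and only then archives the artifacts. The contents of that jbp script are not shown in this log, so the following is purely a hypothetical sketch of what such a size gate could look like; the 256 MiB budget, the `output` directory name, and the du/awk approach are assumptions for illustration.

    #!/usr/bin/env bash
    # Hypothetical artifact-size gate in the spirit of check_artifacts_size.sh.
    # The limit and layout below are assumptions, not values from the real script.
    set -euo pipefail

    output_dir=${1:-output}
    limit_kb=$((256 * 1024))   # assumed per-job budget: 256 MiB

    # du -sk reports the total size of the artifact tree in kilobytes.
    size_kb=$(du -sk "$output_dir" | awk '{print $1}')

    if (( size_kb > limit_kb )); then
        echo "Artifacts too large: ${size_kb} KiB (limit ${limit_kb} KiB)" >&2
        exit 1
    fi

    echo "Artifacts sizes are good"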